Description
Join Ricardo, and maybe some surprise Xmas guests (Trevor from Avi, Kal from MSFT), for a special Antrea-LIVE episode where we learn what it's like to live on the "user end" of the ongoing changes in the upstream ingress community (Contour, NGINX, Avi, ...), and talk about the future of cloud-native load balancing!
A
Okay, we're live. Hey everybody, it's the Antrea show. Kal is here, Ricardo's here, and Trevor is here. So can you introduce yourselves, please, and say hi to everybody? Why don't you go first, Kal, since you've never been on the show before.
B
My name is Kal. I work for Microsoft. I do a lot of the Kubernetes, containers, networking, and Linux work for Microsoft. I also contribute to Kubernetes significantly, especially the networking piece.
C
Yeah, hey everybody. My name is Trevor Spires. I am a load balancing and ingress specialist here at VMware, focused on the Avi and AKO solution. So I don't have any feats of strength like Kal to show off as far as contributions go, but I can speak about Avi and AKO a little bit anyway.
A
Per is here, cool. And Per is here too. Now, Trevor, I think I've seen some of your other streams; you've done a few live streams that I've seen that were, I think, pretty educational in the past. So I'm trying to convince Trevor to maybe help me host the show. What do you all think, do you think Trevor should? Why don't we see if he has fun today. If everybody's really nice to him, I might get a co-host, so be really nice to Trevor.
A
Don't
make
fun
of
him
when
he
tries
to
show
us
stuff
and
then
pair
pair
peters
peterson
is
here
he's
a
new
member
of
the
kpg
community,
he's
friends
with
some
of
the
folks
from
columb.
So
it's
good
everybody's
showing
up
hi
jun,
jen
june
jen's
here
steve
sloka
is
here
steve?
If
you
want
to
talk
about
anything
contour,
just
let
me
know
like
I
can
patch
you
in
or
whatever
or
if
not.
If
you
want
to
just
put
some
links
in
the
show
notes,
you
could
do
that
too.
D
Yeah, hey, so I'm Ricardo. I work at VMware as well. I am not from the modern application team or networking team like Jay or Trevor; I'm just doing some other downstream stuff. But in my spare time I help some folks maintaining ingress-nginx, and usually I break more things than I fix. But that's the story of my life.
D
So we've got a lot of contributors on this. I was one of them when Alejandro was maintaining it, and Alejandro is a really nice person; I worked a lot with him. And when he decided to step down, we tried to build a community around ingress-nginx, because it's an important project for Kubernetes: it was the first ingress controller.
D
So we assembled a community. Today we have two maintainers with approval permissions, which is me and James. We've got a lot of folks from a lot of areas: we have Carlos Panato, who is from SIG Release, helping us with the Helm charts, and Elvin. Elvin is also one of the maintainers.
D
With approval permission; he wrote the whole dynamic configuration in NGINX with OpenResty. We've got Jintao, Long, Noah, a lot of folks working on a bunch of areas. So one thing that we've tried to pull together is people not only willing to contribute code, because not everybody knows how to develop in Go, but also people who know the operational issues of ingress-nginx. So we have people working on the documentation, people working on the Helm charts. I would say that we usually meet weekly.
B
It pays my bills, right. I have a question for you both: have any of you started using the Gateway API? Have you tried alpha 2? I think v1alpha2 was out a few weeks back. Have you tried to work with it?
D
So I know that Steve made a demo of Gateway API and Contour in one of the TGIK episodes; it was really cool. We are not working with Gateway API yet in ingress-nginx, because we've been stuck on all of the CVEs that you probably saw in the past, right. So we've been mostly doing some firefighting, but Jintao actually started working on the Gateway API layer in ingress-nginx. So mostly our idea is this:
D
While we are trying to split ingress-nginx into a control plane and a data plane, we want to change the control plane in a way that it can speak Gateway API or it can speak Ingress objects, so it would be easier. Yeah, so Steve is telling us that Contour already has Gateway support, and it's pretty nice. Yeah, Steve is one-upping you, like...
A
Vivek
is
whenever
I
unders
so
vivek
was
just
telling
me:
oh
here's,
some
news
if
you've
liked
your
plot,
your
cni
plugins
thing
merged
today,
right,
where
is
it
where's,
your
cni
plug-ins
thing,
cni,
you're,
malta's
stuff
right,
your
dhcp
thing,
if
you
think
he's
been
jumping
up
and
down
about
this
just
merged.
I
think
today
it's
this
dhcp
thing
for
for
cni,
so
you
can
have
different.
I
guess
it
enables
like
the
telco
stuff,
where
you
want
to
have
different
dhcps
on
different
mix
and
stuff,
like
that.
A
So
vivek
showed
me
that
today
and
he's
really
excited
about
it.
Now
he
wants
it
in
tanzania
tanzu!
That's
how
I
say,
because
I
was
born
in
oklahoma,
okay,
so
who's
next.
What
what
who
who's
who's?
Next?
So,
okay!
So
trevor
you
have
a
short
demo
and
ricardo
you've
got
a
lot
of
stuff
trevor.
Do
you
want
to
go
first
or
next,
or
do
you
want
to
fight
over
it.
C
I don't want to fight Ricardo, he looks pretty tough. I'm happy to knock mine out quick.
C
And yeah, on that Gateway discussion, Vivek's calling it out too. I think we have (correct me here, because I haven't seen the latest) limited support for the Gateway spec now. Last I checked it was layer 4 only, so I guess we're somewhere between where NGINX is and where Contour is, but I'm very excited to see where that goes. And actually, I've seen a lot of customers experimenting with it.
C
I haven't seen anything adopted yet, but I'd definitely be curious to know: does anybody know anybody using it in real life yet?
C
There are interesting things running in some very cutting-edge production environments, I suppose.
C
But yeah, I wouldn't put it in production, that's for sure. Not...
B
Yet, at least. At least if you're not elbow deep in it, don't put it in production. My interest is actually the inverse of that: I want to know how people are finding it. I don't know if there are any shortcomings, anything we missed, and that's what you really look for in the alpha stage, right? Like, have we missed something? Have we screwed up anything? Have we done something incorrectly? So, yeah.
A
So
then,
do
you
who's?
Oh
alpha
sight.
I
know
who
this
is.
This
nishad
is
here
who
shot
is
watching?
Okay,
so
what's
up
nishad,
so
sean's
been
fighting
aws
all
day,
so
I'm
thinking
ricardo
trevor.
If
your
thing
is
quick,
I
think
maybe
you
go
first
and
then
ricardo
can
show
us
some
stuff.
That's
what
I
have,
but
I
bet
you
anything
when
you
try
to
share
your
screen.
It
won't
work
and
then
so
ricardo
will
go.
While
you
try
to
figure
that
out.
C
Awesome, GSLB, cool, all right, GSLB. Yeah, there's just a quick thing. It's actually not the fun layer 7 data plane stuff that folks playing with ingress probably know and love today, but something unique that we offer with Avi in our Kubernetes operators is GSLB. So I wanted to show you all quickly what that solution looks like, do a failover, because it's fun to kill things, and also give you all a resource for anybody that wanted to learn more about it.
C
I have this demo in a hands-on lab that we just released, which I'm pointing a lot of people to. So at a high level, if you haven't worked on global load balancing before, particularly within a Kubernetes environment, all it really is is an intelligent way for us to respond to an FQDN. That way we can route users to different clusters, or different IPs, if you will. So there's a number of use cases, mostly centered around resiliency, so active-active.
C
That's kind of the common nomenclature for it. Some people still call it GLB, and there are a lot of different GSLB providers out there in the world too; we're just one of them, and one of the few that can be consumed via a CRD in Kubernetes, which is pretty cool. So with that, I'll send the link to this lab in case anybody wanted to play with GSLB or see some of the code for how you would actually deploy this, but just to...
C
Yeah, anybody can do this, so I'll post it.
C
Okay, cool, I'll do that right now; I'll do that after I finish this demo, but yeah.
C
So what I've got here is actually an app where I'm resolving a DNS name to a VIP in my primary cluster right now. So if I go back to this diagram: I'm resolving DNS for an app hosted in one cluster, but we could have two clusters in different AZs, and that could be the traditional sense in the cloud; or maybe you have a data center with different IP schemas that you consider AZs, or different regions, or completely different infrastructure, right?
C
But
the
point
is
we
have
two
different
clusters
and
right
now,
when
I
do
a
a
dig
or
an
ns
look
up
on
is
if
this
apps
fqdn
that
I
have
hosting
gslb,
I
am
getting
a
response
of
this
ip
right
so
and
I
can
it's
it's
just
serving
a
little
avi
networks
webpage.
I
can
come
in
and
keep
refreshing
it
right,
but
what
I'd
like
to
show
you
is
is
not
just
that
we
have
that,
but
how
quickly
in
in
this
instance,
we
can
fail
failover.
This
might
be
for
maintenance.
C
This
might
be
if
if
an
app
goes
down
and
stops
responding,
so
what
I'll
do
is
I
have
this
avi.
C
Yeah, CloudFront, yeah, I see what you're saying. I'm not familiar with CloudFront doing this; I'm sure they have a lot of similar ways of integrating GSLB, so that's very likely the case, for sure. The difference, though, is that CloudFront is an Amazon service, where this is a service that can sit anywhere, right. So I have a customer right now I've been working with who is using this to run apps in Azure and AWS simultaneously.
C
So
anyway,
let
me
kill
this
app
real
quick,
so
I
can
show
you
the
failover,
and
I
have
my
health
checks
tuned
really
aggressively.
So
if
I
turn
this
off,
the
failover
has
been
taking
a
few
seconds
in
my
lab.
Let
me
do
a
quick
refresh
here,
so
I
it
looks
like
the
failover
happened
that
quickly,
where
I
am
now
being
routed
to
a
totally
different
cluster.
So
if
I
do
ns
look
up,
I
should
see
that
now
it's
switched.
C
And all that's automated too; that's what's great. The way it's consumed is just via a selector that you put on a pod. So if I put gslb as a selector on a pod, and I'm catching that in the configuration, you can scale those up and down, deploy additional clusters, and have the service provided across them, which is pretty cool. Not really ingress, but it's certainly an important traffic control mechanism.
C
That
can
add
some
some
resiliency
to
your
environment,
and
you
could
leverage
this
if
you're,
using
avi
as
your
ingress
or
really
any
any
lb
or
ingress
we're
compatible
with
there.
B
Yeah, and also every cloud does it slightly differently, but it falls into two buckets: either a DNS lookup, like the one Trevor just described, or an anycast IP.
B
Everyone
has
its
own
opinion
about
how
dns
should
work
any
cast
ip.
Usually
it's
better
when
it's
especially
when
you
don't
want
to
rely
on
dns
those
posts,
both
options
are,
will
give
you
the
end
result
that
you
just
looked
at
just
they
operate
differently,
and
everyone
has
a
sharp
edge
of
some
sort.
C
And I think Google load balancers have that as their default setting, as an example. But sometimes you might want a little more granular control over the failover than just an IP, yeah. So DNS can become really useful there, because we can make it more selective, right: based on where the request is coming from, we can respond with different IPs. And of course, that's not limited to Avi; Amazon uses Route 53, I think they even have some...
C
I
believe
lambda
does
something
like
this
now
as
well,
so
it
exists
across
across
cloud
providers
for
sure
I
I.
C
...has it limited. So I mean, you can actually do GSLB with Active Directory DNS; all you need is two records.
C
Yes, yeah. I think my point was, a basic quote-unquote GSLB is just a DNS record that resolves to two different IPs, and so pretty much anything out there can round-robin into that. But yeah, I would not recommend using Windows DNS for GSLB, for sure. No, definitely not AD. Yes, yeah. Okay.
D
Log4j... I don't think I can talk about Log4j, but anyway, sorry. I think that, besides Log4j, the first thing we've been fighting is that we've been seeing a lot of people ranting about CVEs lately, in these days, and forgetting that the majority of the maintainers are not paid for that, right. So I've seen some folks saying: hey...
D
You should not be using Java, because of Log4j. I mean, if somehow something similar appears in Go (I don't think it would, but if something appears in Go the same way, like in logrus or in Cobra), people shouldn't be talking about it that way. But I won't say anything about Log4j. I think that maybe you want to talk about this, the recent CVEs in ingress-nginx? You are just looking...
B
At
I,
actually,
I
actually
want
to
make
an
interesting
point
here.
Go
ahead,
so
I've
been
watching
people
reaction
to
this,
and
some
people
were,
let's
just
say
less
than
the
light,
with
the
reflection
on
the
bug
and
the
echo
and
so
on.
The
issue
is
big:
glacier
is
not
small
nobody's
denying
that
right,
but
somehow
you
find
somebody
complaining
and
saying
hey.
B
Oh
this
thing,
just
bad
thing:
that's
bad
design
and
bad
whatever
and
bad
whatever
I'm
bad,
whatever
I'm
better
than
makes
that
really
interesting
question.
Why
are
you
using
it
if
it's
that
bad,
I'm
serious
all
right?
One
of
the
things
that
I
always
tell
people
about
open
source
and
don't
get
me
wrong.
B
Open
source
is
one
of
my
favorite
things
right
to
be
engaged
in
other
code,
the
community
and
everything
one
of
the
things
people
don't
realize
is
the
fact
that
you're
using
open
source,
irrespective
of
the
licensing
discussion,
because
that's
not
a
discussion,
I'm
qualified
to
have
the
idea
is
yes,
you
delegated
somehow
they
build
and
test
and
compile
to
somebody
else
right.
Somebody
else
like
jay
built
the
thing
and
I'm
using
it.
B
So
if
you're
complaining
about
java
or
complaining,
I
don't
like
java,
don't
get
me
wrong,
I'm
not
a
java
person
right.
I
have
my
own
set
of
problems,
but
java
is
a
good
language
right.
The
fact
that
I
don't
like
it
does
not
mean
it's
not
it's
not
good.
It's
just
the
fact
that
you
use
java
and
then
you
stop
and
complain.
B
Then
one
of
two
things:
either
you
change
the
tool
you're
using
or
you
just
help,
help
improving
it
by
just
sitting
down
and
even
contributing
to
the
people,
and
one
of
the
things
that's
really
interesting
is
the
people
who
build
loc4g
came
out.
The
first
thing
they
did
is
we
didn't.
We
did
not
like
this
feature.
B
We
had
it
for
backward
compatibility
because
if
you
come
and
ask
anybody
like
me
between
keeping
a
bad
feature
on
or
breaking
compatibility
will
always
say,
keep
the
bad
feature,
because
we
don't
want
people
to
recompile
and
rebuild
their
stuff
against
ourselves.
We
go
through
this
discussion
almost
every
time
we
think
about
kubernetes,
like
the
api
flag,
every
field,
but
we
don't
really.
We
have
to
think
this
way
like
just
because
of
that
feature
or
something
that
was,
and
you
see
the
ashes
right,
something
that
was
badly
implemented.
A
Do people still use SLF4J? Because that was what I thought was the solution a long time ago: make it so you're not tied to any particular logger.
D
But
I
I
think
that
the
point
that
matters-
what
call
is
saying
is
that
someone
decided
to
use
that
right,
because
that
was
the
right
library
and
you
are.
You-
are
delegating
all
of
the
maintainance
all
of
the
security
audit
to
the
open
source
community.
D
So,
regardless
of,
if
you
are
using
sla4j
or
log4j
or
if
you
are
using
system.out.print
something
you
are
still
relying
on
someone
to
maintain
that
yeah
right
so
and
and
you,
and
instead
of
just
keep
complaining
about
that,
people
should
be
taking
a
better
look
into
like
into
the
security
or
making
some
contributions.
D
So
if
some
vulnerability
appears,
I
shouldn't
be
like
just
ranting
about
the
developers,
the
maintainers
over
the
internet
and
maybe
helping
them
to
fix
that
or
maybe
following
closer
because
the
thing
with
lock4g,
for
example,
you
have
a
lot
of
banks,
a
lot
of
companies
that
have
like
a
billion
dollars
or
more,
and
they
don't
put
any
money
on
that.
For
example,
that's
right
so.
A
Well,
I
have
a
question
about
the
nginx
ones.
I
agree
with
both
of
you
be
nice,
but
I
have
a
question
how
many
of
the
nginx
cvs
are
actual
engine
x1
versus
nginx
ingress
ones,
and
then
I
had
the
same
question
for
the
contour
folks,
if
they're
watching,
so
how
many
of
the
contour
cvs
that
we've
seen
have
been
because
of
envoy
versus
because
of
contour
itself.
D
So,
first
of
all,
I
in
case
of
engine
x-
usually
they
split
right.
So
we've
got
a
bet
in
china,
x,
venerability,
a
bad
city
with
resolvers
back
in
fine.
I
guess
like
it's
like
six
or
seven
months
old
and
the
patch
was
like
hey.
You
need
to
update
you're
in
gynex
from
119
to
120
or
apply
this
patch.
So
this
is
not
an
increase
in
gianx
venerability,
because
that's
not
the
way
that
nginx
uses
in
gynex,
but
that's
like
a
vulnerability
from
from
nginx
itself.
D
It's
the
same
thing
as
you
say:
hey,
like
my
jboss,
got
this
specific
cve
because
it
uses
log4j
inside
the
framework
or
our.
D
No,
we
yeah,
so
we've
got
controller
cds
and
like
and
the
controller
cvs
they
are
related
actually
to
how
ingress
how
ingress
controller
deals
with
the
way
that
users,
they
add
informations.
So
as
an
example
in
gynex
by
itself,
it
doesn't
have
a
a
cve
related
to
remote
code
execution
on
a
specific
place.
Okay
and
lua
doesn't
have
that
as
well,
but
the
way
that
we
implemented
that
in
ingress
in
gynex
allows
some
folks
to
in
an
annotation,
put
some
the
directive
of
lua
that
can
allow
people
to
read
kubernetes
secrets
right.
D
So,
if,
if
you,
if
you
look
into,
if
you
look
into
the
the
the
the
core
reason,
is
the
way
that
we
implemented
that
that's
not
the
way
that
in
chinex
implemented
things
or
the
way
that
or
the
way
that
the
way
that
that
that's
available
in
in
open
rest,
for
example.
This
is
the
way
that
we
like.
We
are
not
sanitizing
users
in
input.
We
are
not
breaking
blocking
users
of
use
like
dangerous
directives
from
openrest.
So
this
is
an
ingress
controller
vulnerability.
A
Yeah,
exactly
that
makes
sense,
yeah.
Definitely
that
makes
sense.
Okay,
so
and
then
steve
is
saying
steve
is
saying
he
can
jump
on
steve.
Definitely
I
gave
you
the
link
if
you
want
to
jump
on
I'd
love
for
you
to
explain
some.
So
you
introduced
steve,
introduced
two
contour
cds
and
I
guess
he
said
he
can
explain
them
so
like
what's
the
most
recent
one
ricardo,
what's
the
most
recent
one
that
you
that
you
all
had
to
patch
on
the
rest
side.
D
So the most recent one is still the one that we've been dealing with. We've got some other problems from that CVE that weren't patched, and we are still patching; it's probably going to be announced, but there is some discussion on that, right.
D
...which does have access to a lot of secrets and a lot of stuff, because all of the TLS keys are in the Kubernetes API, right. So that one was the first, and then, when it was disclosed, people figured out that they could try to abuse other annotations: hey, what if I try to add this into the annotation that sets headers, for example; instead of headers, I want to escape and use something else. So this one, we already patched it, in the sense of saying: hey...
D
You
now
can
sanitize
all
of
your
annotations,
but
we
are
still
working
on
that,
because
the
way
that
ingress
works
and
the
amount
of
access
that
ingress
has
to
kubernetes
api
today
is
it's
dangerous
to
keep
this
both
both
of
those
things
together
right
having
the
data
plane
and
the
control
plane
running
on
the
same
company.
D
Yeah, no, that's one of the discussions that we've been having: when you give your users permission to create things in Kubernetes, you must be aware that those things may be dangerous; you are giving users permission to add arbitrary code that may break things, right. So when someone came to me and said, hey, if you have multi-tenant ingress you are probably in bad shape, I said: yeah, but this is the way that multi-tenancy...
D
This
is
the
way
that
that
multi
multi-tenancy
clusters
works
right.
I
want
to
create
a
namespace.
I
want
to
give
my
name
space
access
to
steve
and
I
want
to
say,
hey
steve.
I
trust
you
and
I
trust
my
system-
that's
not
going
to
allow
you
to
put
any
arbitrary
code
here.
That's
not
what
was
happening.
Let's.
H
No, I... What do you want me to talk about, the CVEs or what?
A
Yeah, yeah, go to that link I sent in the chat, the Contour one. You sent something to me? Well, it's in the YouTube video. It's an issue in Contour; just go to Contour's site, yeah, okay, and go to Security on the right. There's a tab on the top for it. Yeah!
H
Well,
that's
fancy!
So
this
is
a
thing.
Everybody
has
that
I
think
so.
Advisories,
the
second
one
down
yeah.
So
here
are
the
two
that
we've
published.
So
when
you
publish
it
through
github,
you
get
like
cvs
written
for
you
and
everything.
So
the
first
one
was
the
the
external
name
services.
So
contour
lets
you.
You
know
when
you,
when
you
create
an
ingress
resource,
you
reference
a
service
in
kubernetes
and
typically
that's.
H
The problem here is that you could actually make your ExternalName service "localhost", and what we were doing is, we were exposing Envoy's admin web page over localhost only, to kind of hide it from the world. But the problem with that is, if you made an ExternalName service of localhost, then Envoy would route to localhost, and it would expose that admin webpage to anybody who configured it. Oh.
H
Yeah,
so
it
was
only
there
if
someone
could
set
up
that
way,
but
you
know
and
theory
someone
could
do
that.
So
thank
you,
josh
farrell,
yeah,
so
josh.
I
think
he
was,
I
think
he
joined
vmware
and
a
few
weeks
later
he
found
this
or
something
it
was
really.
You
know
early
in
his
in
his
time
here
at
vmware,
but
anyway.
H
A
clever,
clever
hack
to
you
know,
get
access
to
it,
there's
a
second
one
too,
which
is
related
to
kubernetes
itself
to
where
you
can
get
access
across
namespace
services
as
well.
So
if
you
enter
your
excel
name
service
and
call
it,
you
know,
servicename.namespace.cluster.local.
H
You
can
you
can
get,
you
know,
break
out
of
your
namespace
into
somebody
else's
namespace
and
potentially
access
their
service
or
expose
it.
If
you
know
if
they
didn't
want
that,
you
could
still
do
that.
Okay,.
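The two abuse shapes Steve describes look roughly like the hypothetical manifests below; all service and namespace names are invented for illustration.

```yaml
# Shape 1: point the proxy at its own localhost (e.g. an admin endpoint).
apiVersion: v1
kind: Service
metadata:
  name: sneaky-localhost
spec:
  type: ExternalName
  externalName: localhost
---
# Shape 2: escape the namespace via the cluster DNS name of another
# namespace's service.
apiVersion: v1
kind: Service
metadata:
  name: cross-namespace
spec:
  type: ExternalName
  externalName: victim-svc.victim-ns.svc.cluster.local
```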
H
Data plane, that's the one on the left. The ExternalName one is the one I'm talking about, okay; and that one is the one we just discussed, yeah. So what we did was, we mitigated this. The Envoy admin page has things where you can kill Envoy, you can reset stats, you can do all kinds of destructive things, you can shut it down. So we got rid of all of those endpoints that were basically writes.
H
So
we
we
actually
wrote
a
static
rule
in
an
envoy
to
say:
hey,
expose
the
admin
things
over
localhost,
but
just
expose
the
the
read-only
ones
like
clusters,
endpoints
that
sort
of
thing,
and
then
the
other
ones
to
them
and
then,
instead
of
having
it
being
mounted
over
here,
I'm
over.
H
Well, you still can; it's just that what we did is, we moved it from an HTTP server to a Unix socket, okay, so you can't access it through HTTP. Through that socket we expose the read-only interfaces, so you can still get debug information, because we use it a lot to debug stuff: like, hey, show me your configuration, or show me your clusters, or show me your routes, or whatever is set up in Envoy.
D
Yeah, so one of the CVEs that we've got was really similar to this one in Envoy, right: users could create a service with an ExternalName and then point it to the specific NGINX that runs inside the controller, which is responsible for maintaining the backend tables for OpenResty.
D
So
when
that
thing
came
out
actually
that
was
that
one
specifically
didn't
generate
a
cve,
but
later
we've
got
a
venerability
that
I
guess
rob
scott
or
someone.
Someone
actually
saw
that
in
that
could
allow
users
to
use
an
external
name,
to
bypass
ingress,
right,
ingress
controller
and
go
directly
to
the
back
ends
and
back
in
that
time.
So
the
kubernetes
security
response
team
actually
pinged.
D
Maintainers-
and
I
think
steve
was
on
that
beast
as
well
and
they
said
hey,
you
should
probably
take
a
look
into
that.
So
before
the
cv
got
announced,
we
all
started
looking
into
a
way
to
disable
external
names
or
to
filter,
to
filter
that
the
external
names
right.
So
when
it's
something
related
to
the
kubernetes
api
and
that's
reported
via
the
kubernetes
api,
usually
all
of
the
ingress
main
fingers
did
at
least
the
majority
of
them.
They
got
worn.
D
So when I got this warning, I warned the folks from Avi here inside VMware as well, because I remembered that they weren't on the list, right. So we tried to... but that's not something easy to keep up with. A fun thing, actually, is that the Contour maintainers, one of the ingress-nginx maintainers, the HAProxy ingress maintainer, the Kong ingress maintainer...
D
Yeah, people usually use that for S3 buckets, for example, or for some external web services that they need to connect to. They have their own pod running inside the namespace, and they want to use Kubernetes services instead of keeping changing things in the code; they can just call, like, mywebservices.mynamespace.
D
It creates a CNAME in CoreDNS, okay, or whatever DNS you are using; it reconciles and creates like an alias domain. So when you call that, it returns a CNAME to you. Like, if you put google.com as your ExternalName, it's going to say: hey, when you call yourservice.namespace.svc.cluster.local, I will return to you a CNAME to google.com, for example.
A
Oh gosh, I can't draw. Ricardo... CNAME, okay, cool. And then, so the only purpose of this ExternalName is to plumb records into CoreDNS, really? Because I guess you don't need any actual kube-proxy integration for this, because you're not proxying into the cluster.
D
Yeah. So, if you ask my personal opinion, I would rely mostly on a CRD to do that, instead of putting it inside Service, and I think we have some lack of, like, a DNS-as-a-service based on CRDs using CoreDNS, right, yeah. But I'm not sure; I don't know the history of that. Maybe we should dig into the implementation in the Kubernetes code and see who implemented it and why.
A
Yeah, so... oh, Ricardo, why don't you tell them about that idea? I love that idea. So tell them about this idea, and meanwhile I'll use it as a way to show them some KPNG stuff. So we've talked about KPNG on the show before, which splits the control plane out from the data plane in kube-proxy, but you're talking about doing the same thing for ingress controllers, right? I guess we don't call them that; we call them gateway controllers, right?
D
Yeah
yeah
some
something
like
that.
We
are
still
an
ingress
controller
because
we
don't
implement
the
gateway
api
yet
but
yeah.
So
one
thing
that
we've
been
discussing
in
cupping
meetings
past-
I
guess
per
actually
brought
that
idea
as
well
as
how
we
should
probably
be
using
that
idea
of
having
something
that
that
deals
with
all
of
the
kubernetes
objects
and
then
generates
whatever
the
data.
Please
expect
right
so
in
copying.
D
If
you
have
a
service
you,
you
will
generate
something
that
will
be
applied
as
an
nf
tables
or
as
an
ip
tables,
or
something
like
that,
and
while
I
was
discussing
with
the
community
the
ingress
in
genex
community
about
that
we've
been
thinking
like
hey.
D
Why,
instead
of
suffering
with
gateway
api
implementation,
we
don't
think
about
implementing
something
that
can
deal
with
all
of
the
all
of
all
of
the
ingress
or
gateway
api
objects
and
then
generates
whatever
the
data
plane
can
use.
So
this
could
be
used
by
nginx
or
this
could
be
used
by
hd
proxy
or
this
could
be
used
by
contour
or
whatever.
D
By
the
end
of
the
day,
all
of
those
proxies
which
they
are
like
envoy
or
in
general
x-ray
proxy,
they
just
expect
some
some
structure
with
front-end
and
back-ends
and
and
the
configurations
of
those
right
and
then
certificates.
D
Okay,
so
and
that's
the
same
for
all
of
them
right.
So
what
now
say
that
again,
they
they
all
expect
the
same
thing.
If,
if
you
are
dealing
with
layer,
seven
http
routing,
they
expect
the
front
end
and
they
expect
a
back-end.
The
back-end
servers,
the
pods,
the
part
ips
right
yeah,
and
they
just
expect
that
that
structure.
So
I
can
show
you,
for
example,
the
structure
that
we
we
get
in
in
ingress
in
gynex.
D
Let
me
just
move
to
terminal
2
digital
area,
and
I
can
I
can.
I
can
show
you
yeah
yeah
sure
so.
D
Is it the squid one, or... this is screen two. This is screen two. Let me try capturing a different screen.
D
Let
me
see
if
moving
my
okay
jay,
I've
got
the
same
problem
with
mac
that
doesn't
allow
me
to
share
my
screen.
D
But yeah, anyway, let me see: if I jump in with Google Chrome here, I can do that.
D
Okay, cool, so let me share my screen over here. Let me see if this one works.
D
So yeah, this is like... I am doing some implementation of a gRPC client/server that fetches the whole configuration, the data structure, from ingress-nginx, right. So if I run this, I can see all of my... I get a data structure which is composed basically of a bunch of frontends and a bunch of backends, right. So this is mostly what every ingress expects, right.
D
So
I
have
this
if
I
have
the
certificate
errors
and
the
host
name
and
all
of
the
locations
and
what
I
have
in
in
the
backend
locations
and
also
the
the
the
pod
of
the
part
ideas
of
those
back-end
locations
right
so
the
same
way
all
of
the
all
of
the
proxies
they
they
expect
this.
This
sort
of
of
configuration.
D
To
influx
db
in
there,
what
is
that
all
about?
That's
that's
a
configuration
that
we've
got
in
ingress
in
gymnast.
That
allows
you
to
to
add
the
metrics
in
influence.
D
Yeah
yeah
yeah
right,
but
that's
that
that's
something
specific
from
this
front
end
right.
This
is
like
a
a
bunch
of
bikes
that
I
can
later.
I
can
open
that.
I
can
just
marshal
them
right.
The
thing
is
that,
if,
if
you
take
a
look
into
this,
I
have
like
my
front
end
defined
here
with
this
host
name,
and
then
I
have
the
locations
defined
here
and
what
I
should
do
with
those
locations.
So
in
this
case,
for
example,
if
I
have
let
me
see
if
I
can
find
the
path
here.
D
Okay,
if
this
is
at
the
full
back
end?
If
not
so
I
have
this
path
here
and
it
says
that
this
path
should
be
directed
to
a
specific
backend,
a
or
back
end
b,
for
example.
Right
and
also
I
got
all
of
the
ips
the
same
the
same
way
we
do
in
in
captain
getting
the
end
points
yeah.
So
every
every
like
hd
proxy
is
configured
the
same
way.
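That "this path goes to backend A or backend B" lookup can be sketched as a longest-prefix match. This is a simplified stand-in, assuming prefix-style routing; it is not ingress-nginx's actual matcher, and `matchBackend` and the example paths are made up:

```go
package main

import (
	"fmt"
	"strings"
)

// matchBackend picks the backend whose location path is the longest
// prefix of the request path -- a toy version of how a data plane
// routes a request to a specific backend.
func matchBackend(locations map[string]string, reqPath string) (string, bool) {
	best, backend := -1, ""
	for path, b := range locations {
		if strings.HasPrefix(reqPath, path) && len(path) > best {
			best, backend = len(path), b
		}
	}
	return backend, best >= 0
}

func main() {
	// Two locations under one host: "/" falls through to backend-a,
	// anything under "/api" goes to backend-b.
	locations := map[string]string{
		"/":    "backend-a",
		"/api": "backend-b",
	}
	b, _ := matchBackend(locations, "/api/v1/pods")
	fmt.Println(b) // "/api" is the longest matching prefix, so backend-b
}
```

Once a backend name comes out of the match, the proxy load-balances across that backend's endpoint list, the pod IPs.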
D
The way HAProxy works is, you have a front end that defines a bind or something like that, and then you have all of the back ends that answer for those front ends, and Envoy is the same. So I was asking people, hey, why don't we try to make, like, a common controller that generates a common data structure, and then, via gRPC, the same way we do with HAProxy, we just have a bunch of back ends that can sync that, right?
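A toy version of that common-controller idea: one control plane renders the config, and many data-plane backends sync it. In the real proposal the sync would go over gRPC; plain channels stand in for it here, and `ControlPlane`, `Subscribe`, and `Publish` are all invented for the sketch:

```go
package main

import (
	"fmt"
	"sync"
)

// ControlPlane holds the rendered config and fans it out to
// subscribed data planes. gRPC streams would replace the channels
// in a real implementation.
type ControlPlane struct {
	mu   sync.Mutex
	subs []chan string
}

// Subscribe registers a data plane and returns its update channel.
func (cp *ControlPlane) Subscribe() <-chan string {
	cp.mu.Lock()
	defer cp.mu.Unlock()
	ch := make(chan string, 1) // buffered so Publish never blocks here
	cp.subs = append(cp.subs, ch)
	return ch
}

// Publish pushes a new config snapshot to every subscriber.
func (cp *ControlPlane) Publish(config string) {
	cp.mu.Lock()
	defer cp.mu.Unlock()
	for _, ch := range cp.subs {
		ch <- config
	}
}

func main() {
	cp := &ControlPlane{}
	var wg sync.WaitGroup
	// Three data planes -- think NGINX, HAProxy, Envoy -- syncing
	// the same common data structure.
	for i := 0; i < 3; i++ {
		ch := cp.Subscribe()
		wg.Add(1)
		go func(id int, ch <-chan string) {
			defer wg.Done()
			fmt.Printf("data plane %d got config: %s\n", id, <-ch)
		}(i, ch)
	}
	cp.Publish("v1: frontends+backends")
	wg.Wait()
}
```

Only the controller talks to the Kubernetes API server; the data planes only ever see the rendered structure, which is the security-footprint win discussed below.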
D
That idea actually didn't get much traction, because I think that all of the implementations, they've got specific things. So NGINX will have InfluxDB, HAProxy has ModSecurity, probably, and Envoy doesn't have that. But that gave us the idea to maybe decouple, at least in ingress-nginx, the control plane and the data plane. So that's what we've been doing right now, so NGINX is gonna run on a non-privileged controller, on an unprivileged container.
D
Sorry, and I may have, like, just one control plane, or, like, a few of them, targeting the Kubernetes API server, and all of the data planes just fetching, just gathering the information from that control plane, which, as it's gRPC, we may also have, like, a load balancer in front of that, which...
A
That solves a complaint that we've got: this is your security footprint now, right, because now all your... oh, there's only one thing that has to talk to the API server, yeah, right, right, right.
D
And we solve another problem that we've been seeing in Kubernetes SIG Network meetings over the past few meetings, which is people complaining about the high availability of the API server, right, yeah. Because in this case, I will have, like, my control plane with gRPC and the running configuration, and that control plane I can replicate to a lot of containers, right.
D
So I may have, like, three, four, or five control planes fetching that information and then providing to the data plane the structure that it needs to configure the proxy.
A
Yeah, so this happens all the time, so now they're doing this in the Calico world. I know Jianjun is here. Jianjun, is there any work being done on the Antrea side to decouple the control plane and the... I mean, well, I guess, like, do we cache anything on the Antrea side to do, like, Typha-style behavior? Like, could you separate your agents if you wanted to? I don't know if Jianjun's still here, though. So Scott's saying actually Contour is kind of already doing that.
A
I guess they kind of baked it in, but it's not generic, right, Scott? So I think that's kind of the way that... but yeah, Contour kind of has everything cleanly separated.
D
Every container talks with it, yeah. And to be fair, I spoke with Steve and we started to discuss the approach that Contour has to that same model. I just didn't have enough time, man, as I do all of those things during the weekend, and I didn't want to bother, like, Steve with stuff on weekends, so.
A
All right, this is good. I mean, I think you've taken enough time to hang out with us today, and I know you're really busy. So thanks for coming to hang out with us, Ricardo, this is really good.
A
I wanted to say that Ricardo deserves a shout-out for the fact that he sort of joined this relatively, like, nobody-wanted-to-work-on-it project and sort of took a lot of ownership of it, and I think that's really cool, because most of us try to join the hot new project. Like me, like, I was like, I'm going to just go do Antrea, because, like... so it's really cool to see folks, like, jumping into these old projects that need help.
A
All right, so, let's... okay, does anybody have anything else they want to talk about before we leave? I see there's a lot of people still hanging out. Unfortunately, we end at one hour; we don't go too long on this show. So if anybody wants to talk about anything else, we've got Ricardo here, if anybody's got any NGINX or Contour or Avi questions, or whatever, or Antrea questions.
A
All right, bye, everybody. Thanks, Matt, thanks for showing up, thanks for coming and hanging out. Okay, see.