From YouTube: CNCF SIG Network 2021-04-01
A: Curious: considering what today is, does anybody have any stories that they wouldn't mind sharing as an anecdote?
A: Yeah, some of the April Fools' jokes get so subtle. I was sitting there reviewing a PR earlier today. This particular contributor is, you know, one of those engineers that actually enjoys documenting and writing things down, and what a lovely thing. So, consistently, this contributor will attach, well, usually an animated GIF sort of demonstrating the function or the thing that's changed. This time I sat there looking at it and finally figured out it's just a static screenshot.
A: It took me about 10 seconds of sort of waiting for it to start, and then I thought: I wasn't sure if this was a really subtle April Fools' thing happening or not. That's about as interesting as my April Fools' gets, I guess.
A: We're five after. We've got Mr. Owens with us, and Mr. Blake, whose last name, despite how long I've known Blake, I still can't say. Blake, can you give it to me one time, if you would?
C: Yes, sorry, I've got a little background noise. It's Blake Covarrubias.
A: Yeah, very good. Ms. Rob is here; Mr. Farrell, Mr. Ranganath, and Yuri, Mr. T; yeah, cool, good. And Mr. Bell as well, good deal. All right, we're five after. Of the topics that we have listed, does anyone see items that we're missing today?
A: If you do, please pop them in. Just as a brief overview of the agenda: we'll have at least, probably, half of our time for Yuri to take us through k8gb. Before that, we'll cover some service mesh working group topics.
A: There are a number of people who sent regrets today, so we cut down the agenda fairly quickly. To start off with: GetNighthawk.
A: I think all the votes are in, in terms of those who... I guess, if you're on the call and you haven't taken a look, or you aren't familiar with this project, please go take a minute. And whether you're familiar with the project or not, if you want to make a remark or a vote on the logo for it, please do; indecision is awful.
A: I'll give a project progress update on behalf of a couple of the contributors to the project, and Vinayak is here with us now, so he might have another portion to this update. There's been progress on building Nighthawk as a binary that's compatible with the base image that's used for the Meshery project. Those builds take a couple of hours in GitHub; there's a custom GitHub Action that's been written now. I think those builds take a couple of hours in part because all of Envoy's toolchain is included in them.
There's been a recent contributor, Jibril. Jibril, are you on?
A: Nope. There's been a recent contributor that was trying to figure out if they could get Nighthawk on Alpine; I'm unqualified to speak to that, and Jibril isn't here. So the last item, as an update on GetNighthawk (we have a few of these today), is a maintainer nomination. For my part, I had intended to get to this a bit earlier, and that is to get an email out about, in this case, Vinayak Sharma.
A: So, Mr. Sharma, just to embarrass you a little bit: your stewardship of the project site, and how you've been accepting PRs and giving direction to, I don't know, about five others; that work doesn't go unnoticed. I think you've shown great intentions towards the project, and the site is coming along nicely.
A: You have my vote; I'd like to put you up for a maintainership nomination. I think we'll get an email out on the mailing list about that. So now is the moment that you should speak your piece, and get out of this if it isn't something that you want.
D: Hi everyone, my name is Vinayak Sharma. For the last month, or a little bit more than that, I have been working on the GetNighthawk website and collaborating with a few other contributors, and it would be great to get nominated as a maintainer for that project as well.
A: That's fantastic. Good, good. Service Mesh Performance: the next project to rehash is Service Mesh Performance, and we'll just cover Meshery in the same swoop. Both of those projects have been advanced through the service mesh working group; they were both proposed, thanks to Ken, and they've both been submitted for sandbox consideration. This last go-round, well, got rescheduled; it was a few days ago, and it was supposed to be a few days before that.
A: I think the TOC only made it so far down that list. If you're on the TOC mailing list, you've seen how far down they got: they got past k8gb.
A: So Yuri will... I won't steal thunder here, but we'll talk about that. Those two projects are up for review next time around. Usually that's a two-month gap; in this case, it's a month out.
A: For Service Mesh Performance, though: as the contributorship and the maintainership have grown, and they are growing, and as it's being proposed for adoption, there's been, well, a more concisely articulated roadmap. I wanted to bring this up as hopefully just a point of discussion and feedback.
A: There's an open pull request on the roadmap here to correct a couple of things. But I'm gonna let this settle in, if those that are interested, those that are familiar with the project, could think on this for a minute and express an opinion about it.
B: Thanks for adding me to review this. I think it's a really good start in terms of dividing it into spec, publication, participation, and research; it covers different areas of the roadmap. I do have a few items you could add. I think some of them are being captured in the SMP sandbox proposal.
B: We could also add them here in the roadmap: for example, some of the load generation aspects, or running the performance aspects across a distributed cluster rather than a single one.
B: So that's another thing we could add here. Internally, we've been doing some work to kind of measure the effectiveness of a service mesh, so, at some point, once we have some definition there, I'd be happy to share as we go forward. Some of these things could be added here. I'll definitely take a look at this a little bit more and provide an update to it. Nice.
B: Yeah, I think it does touch upon the distributed performance analysis. Maybe we could specify some more details along those lines: not just have a broad charter, but specify some more details, so it's very clear as to where this is headed.
B: It would help if we could divide this into short-term and medium-term types of goals, too. Maybe that helps someone new looking at some of these things; they would come in and say, okay, short term, if this is what you're focusing on, maybe I could help you along those lines.
A: Well, it's called distributed performance analysis. Maybe it's not just that slide; maybe it's one or more of these here. These may or may not be helpful; I guess I thought it was worth pointing out.
A: One item that I'm just reminded of, in looking at and thinking about research, is how, with your assistance specifically, and others that might be interested (Ken has actually brought some of this forth as well), we were able to meet with Anirudh, the professor at NYU, though we didn't get to it the last time we got together.
A
We
didn't
get
to
include
mohit
of
nitk,
but
but
I'm
glad
that
you're
here,
focusing
because
some
of
your
help
in
managing
those
relationships
and
keeping
them
fresh
and
sort
of
having
a
kisses,
consistent
kind
of
cadence
to
those
interactions
will
be,
I
think,
is
it
will
be
really
helpful.
B: Yeah, absolutely, happy to. I think one of the previous calls had even asked about it, so yeah, happy to help in this regard.
A: One item that's a bit of an action item; well, let me ask you all if this makes sense. The service mesh working group has a number of, you know, small initiatives that have been growing and growing, and some of those, like SMP, have grown enough that they're as big as it is now. And, by the way, the next topic here is about maintainer nominations, Sunku being one of those. It strikes me that we can send out that type of nomination on the service mesh working group mailer; that's entirely appropriate and probably should be done. But also, there's a domain name associated with it, and there aren't other mailing lists specific to the project, so I guess I bring it up as food for thought.
B: Right, yeah, I was thinking that too. I think the newly created service mesh working group domain is a good start, but, as we have a lot more traffic there, we can subdivide into either Meshery or SMP or GetNighthawk, some of these.
B: So, while we are on this topic: one thing I had shared was that there were a couple of volunteers interested in getting started on this. Is there any info?
A: Yeah, yeah, as a matter of fact. Sunku, thanks for asking. There are a couple of contributors, well, people who'd been interested for some time, and both of them had really studied some of the goals around MeshMark and SMP.
A: One of them had studied a bit more deeply around Nighthawk; it's kind of around GetNighthawk, it's all sort of intertwined some. The two gentlemen are... I'll write their names down and make an introduction. I think I'd recently sent them both the draft of what that roadmap looked like because, in part, I was letting them know: hey, steam is building, it's about time to jump in. And they'll need some guidance; it'll be your guidance. One of them, his name's Channika.
A: I'm misspelling it. And the other one has done some Linux kernel work around networking, Nisarg. An introduction is going out. Immediately, you know, I think they'll have questions, questions that you can help answer and I can help answer as well. I'll send you some of their questions. I mean, they're gonna want to, you know, scope things: scope of the goal, scope of the work, how closely they can engage.
A: Okay, oh, yep. So then I sort of informally said that there'll be a couple of other nominations: Sunku Ranganath on SMP as a maintainer, and Nick Jackson, who's been, well, he's been on...
A: ...this call a number of times, but he's long been a supporter of SMP and actually helped shape a few of the initial roadmap items around the Open Application Model and SMI, and how these three specs, OAM, SMP, and SMI, line up. And then Otto van der Schaaf is quite keen on the spec. He and Jakub Sobon (I'm sorry, I was calling him by his GitHub name), Otto of Red Hat and Jakub of Google, have long been very supportive of SMP, to the extent that something like SMP marries up nicely with their focus on Nighthawk, and so we'd like to invite and nominate Otto for maintainership as well.
B: Yeah, thank you. Actually, I wasn't prepared or ready for this, but thanks for putting my name in; happy to help.
A: Sunku, to be very candid here: it's actually because of your assistance specifically that there's been enough momentum to make this into what it should be. So yeah, it sounds great.
A: Some SIG Network topics. So, unless I'm mistaken, Emissary-ingress, the project formerly known as Ambassador, I believe is still out for review. Can anyone correct me on that? I don't think that its adoption at incubation level has been... yeah.
A: Other reviews that are open: Linkerd is up for graduation; its reviews are in process. William Morgan, who was on last time, has been really helpful with lots of data and with making sure the write-up is getting complete; he's been so helpful that I've heard from him almost every day. That team is ready for, you know, public review, and so, for the SIG review, the draft of it will be in your inbox later today for your input, feedback, approval, disapproval, no problem.
A: That leaves us with Yuri and k8gb. Yuri, there was... I guess I gotta say this: there was another (was it Yelp's?) yet another load balancer that was sort of also up for sandbox review this last go-round.
A: Totally. Well, with that, let me stop sharing. Yuri, today you're going to give a presentation of the project, kind of cool, and introduce it to everybody.
E: Right, hi guys, I'm Yuri. I work as a principal engineer for Absa. Let me share my screen. Do you see my screen, guys? Everything is cool? Yeah. So I tend to keep a minimal amount of slides (sorry for that), just to provide context, and then we go right to the live demo. k8gb originated in Absa as a totally open source project from day zero, and the idea behind it is to create a cloud-native global service load balancing solution.
E: Why we needed it is pretty much our business needs. Absa, first of all, is a financial organization which serves the African continent; it's a South African bank, pretty much. The usual deployment pattern is at least two geographically disparate clusters, in separate data centers, to achieve reliability and availability for financial applications. And given that a substantial amount of workloads were already running on top of Kubernetes, we needed something to enable global service load balancing the Kubernetes way and the cloud-native way.
E: One of the things that differentiates us from existing solutions is the absence of a single point of failure. We do not have any instance that passes traffic through itself; it just doesn't exist. And we do not have any form of control cluster, so there is no single point of failure or bottleneck. The controller, an operator, is deployed right to the target clusters where the workloads are running.
E: With that in mind, we heavily utilize the standard Kubernetes primitives that are running in the cluster: standard Ingresses, Kubernetes Services, Endpoints. Everything drills down to the pods and their associated pod probes.
E: k8gb is aware of the internal workload cluster state, and that's how it reacts to workload healthiness, or unhealthiness, and steers the traffic according to the load balancing strategy, reacting to the pod status. Application teams have all the flexibility to define these probes in as much detail as they want, specific to their applications. The traffic steering itself is based on DNS, which is kind of battle-tested by the internet, so we benefit from practical reliability; obviously, we have some limitations around DNS.
E: The most prominent one is time to live, right, TTL, and how fast end-user customers get the updates; we will see it in the demo. The solution is designed to be as provider-agnostic as possible, meaning that we do not create DNS records in the environment DNS, like Route 53, or Infoblox, or NS1, whatever. We only automatically configure DNS zone delegation, which points and redirects the DNS queries down to our CoreDNS pods, which are an integral part of k8gb.
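As a rough illustration of the delegation Yuri describes (all names and addresses here are hypothetical, not from the talk): the edge zone carries only NS delegation for a subdomain, handing resolution off to the CoreDNS instances each cluster exposes, so application records never need to be written into the edge DNS itself.

```
; Hypothetical edge zone file for example.com.
; The gslb subdomain is delegated to the CoreDNS pods that k8gb
; exposes in each cluster; k8gb automates only this delegation.
gslb.example.com.        IN NS  gslb-ns-eu.example.com.
gslb.example.com.        IN NS  gslb-ns-za.example.com.
gslb-ns-eu.example.com.  IN A   203.0.113.10   ; EU cluster CoreDNS
gslb-ns-za.example.com.  IN A   198.51.100.10  ; ZA cluster CoreDNS
```

A query for, say, app.gslb.example.com then reaches whichever cluster's CoreDNS answers first, and the answer set is built from current workload health.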
E: So we answer DNS queries with responses that are dynamically modified according to the global load balancing strategy and the associated workload healthiness. From an implementation standpoint, we based our solution's initial bootstrap on the Operator Framework, and we follow the recent upgrades and try to keep up with the project. We started testing with release 0.6, where it was quite a bit disconnected from Kubebuilder.
E: Now it's pure Kubebuilder with the SDK standing on top, and we migrated to the recent version and try to keep up with upstream, at least a minus-one release. CoreDNS is a very important part of k8gb; that's exactly the part that provides the DNS responses. ExternalDNS is used to integrate with the environment's DNS infrastructure; Route 53, if you're on AWS, is one of the good examples of that.
E: It has quite a good number of DNS providers out of the box. We used to have a special dedicated etcd cluster and the etcd operator to act as a backend for CoreDNS, but we deprecated that by developing a special CoreDNS plugin which is capable of reading the DNSEndpoint CRD right from Kubernetes, through the Kubernetes API, instead of using the standard backend for an etcd cluster, which is SkyDNS.
E: We used to have quite an amount of reliability problems with the etcd-based setup, and the etcd operator was already deprecated, so we invested in developing this special plugin. k8gb currently consists of just three components, the controller, CoreDNS, and ExternalDNS, making the whole setup much more reliable, and the project drives only a single CRD, of kind Gslb. That's it.
E: We try to keep things as simple as possible, from a management perspective and for integration with other projects. For the target DNS, the environment one: we tested Infoblox and Route 53 heavily, and they should be production-ready. NS1 is already very well tested; we just do not use it yet at our scale, but all the tests are passing. And it potentially works for the other providers...
E: ...that ExternalDNS provides; we just heavily tested only these three. There are other open source projects that we've integrated with. Admiralty is one of the good examples, where Admiralty is used for global workload scheduling across multiple clusters and k8gb enables global load balancing for that global workload; we have a nice tutorial on the Admiralty project page. And we can get straight to the demo, but before that, just to provide the context on the demo setup: I'm on the k8gb.io page.
E: So that's pretty much it: two data centers, which in AWS translates to two AWS regions, and that's where we start our demos.
A: Yuri, I've got something of an ignoramus question, and that is, well: is the primary factor that you're using there DNS zones? From the perspective of a client looking to get to a service, and the path that they follow as they initiate a DNS request, is it primarily zones that are being used for that, and then the affiliation of services to a given DNS zone?
E: Yeah, so the DNS zone is the same; everything is behind the same zone. Well, maybe I can start the demo to unpack the answer to the question. In the right pane we will run a test script which is basically doing this stuff, curling the test application.
E: We already have k8gb installed on two clusters, plus a test workload. The workload is a standard test application from Weaveworks, a pretty popular one, podinfo, and we tag each deployment with its associated geographical location, just for visibility of what we are querying. Currently we are testing the failover strategy, where we have a primary data center in Europe and a secondary one in Africa. So how does it look from a setup standpoint?
E: Already installed, we have exactly these three components: the k8gb operator, the controller itself, which drives all the orchestration logic; CoreDNS, to handle the DNS queries and responses; and ExternalDNS. This one is specialized for Route 53, deployed according to the Helm values configuration, and it handles the zone delegation automatically.
E: Sorry, it was Helm... well, it's the YAML spec definition. There's our API group for k8gb, kind Gslb, standard metadata, and what we are doing here: we have an embedded ingress spec as part of the Gslb spec. It's a pretty standard ingress, specifying the host and the associated service, right, port and path. It's actually the same ingress type in the Go code behind the scenes; we just embed it into this Gslb instance.
E: The controller reacts to that, creates the associated ingress for global load balancing, and performs additional actions according to the strategy. So the spec is composed of the standard ingress plus the GSLB strategy to follow. In this specific case, we have a failover strategy and we are pinning the primary geographical tag to be eu-west-1.
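A minimal sketch of such a resource, modeled on the public k8gb examples (the host and service names here are illustrative, not necessarily the ones in Yuri's demo):

```yaml
apiVersion: k8gb.absa.oss/v1beta1
kind: Gslb
metadata:
  name: test-gslb-failover
  namespace: test-gslb
spec:
  ingress:                            # a plain ingress spec, embedded as-is
    rules:
      - host: failover.test.k8gb.io   # illustrative GSLB host
        http:
          paths:
            - path: /
              backend:
                serviceName: frontend-podinfo   # illustrative service
                servicePort: http
  strategy:
    type: failover              # the other built-in strategy is roundRobin
    primaryGeoTag: eu-west-1    # traffic stays here while it is healthy
```

The controller generates a regular ingress from the embedded spec and keeps the DNS answers for the host in sync with the strategy and workload health.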
E: This Gslb is already deployed here, as test-gslb-failover, and we can see at runtime what kind of status it has. As you can see, it's exactly the same spec, and here we have the current cluster geo tag and the healthy records; it identifies the healthiness of the workload, again transitively through the service and the number of endpoints. Basically it's the state of the pod liveness and readiness probes, and it populates the DNS records with...
E: ...let's say, the healthy addresses. There is an additional, kind of internal, DNSEndpoint CRD now; we are using the CRD from the ExternalDNS project, from its CRD source. If we get the YAML here, you can see that the failover record is populated with these IP addresses, and CoreDNS, given it has our special CRD plugin, is capable of reading from this CRD.
E: In our AWS scenario we have a network load balancer, a local load balancer, which sits in front of the workload. It's a standard ingress-nginx deployed here in the test setup, and we have the associated NLB deployed. If we do a dig against this NLB, we get exactly these three IP addresses. So, assuming the workload is healthy, we populate the DNS response with the healthy network load balancer IP addresses associated with the workload.
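The dig round-trips in the demo look roughly like this hypothetical session (host names, the NLB name, and the addresses are made up; a real run needs the live clusters): first resolve the NLB that fronts ingress-nginx, then the GSLB host, and compare the answer sets.

```
$ dig +short abc123.elb.eu-west-1.amazonaws.com   # NLB in front of ingress-nginx
203.0.113.11
203.0.113.12
203.0.113.13

$ dig +short failover.test.k8gb.io                # answered by k8gb's CoreDNS
203.0.113.11
203.0.113.12
203.0.113.13
```

While the primary workload is healthy, the GSLB host returns the same addresses as the primary cluster's NLB; after a failover, the second query switches to the secondary cluster's addresses once the TTL expires.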
E: Same number of nodes, and exactly the same Gslb failover spec; we do not modify anything, we just apply the same spec on the other, secondary cluster without any modification.
E: The secondary cluster is also aware that the primary is eu-west, so it returns consistent responses; it also returns the IP addresses for the European data center, because the workload there is healthy. Now let's try to emulate some form of workload failure: again in Europe, we just scale down the test workload.
E: That's why we are still hitting the old endpoint; we are operating within the DNS TTL limits, and currently the TTL is 30 seconds, plus some deviation associated with the reconciliation loop. And here we already see a switch. In the case of failover there is a small downtime, and now we are already steering traffic to Africa, because the 30-second TTL has expired and we are already resolving to the healthy workload in the secondary data center.
E: ...the IP addresses of Europe, and you can see in this demo querying loop that it has already failed back over to Europe. There was no downtime, because the workload in Africa, the secondary, was always healthy. That's another use case: how we can steer the traffic in a controlled way, if you'd like to. I've seen teams doing, like, a manual pinning of the main data center from one to another, actually making some form of global blue-green.
E: That was another unexpected use case of k8gb that we've seen from end users. So that's pretty much the failover strategy, and the second one we have is round robin, which is, just, yeah, it's demonstrated...
E: It basically returns a mixed response from both of those data centers. As you can see, the DNS response will contain IP addresses for Europe and IP addresses for Africa, and the response will mix them up.
E: So if you do a dig, all right, here we go, it's this totally mixed response. We also have a roadmap item to make it more consistent, to let us steer the traffic and workload in a more predictable way, like 50/50, but currently it's a standard...
E: ...you know, very random round robin over the geographical data centers. That's pretty much the two basic strategies that we utilize in Absa, and it's enough for our business case. We definitely have some more advanced stuff on our roadmap and are trying to gather feedback from the communities. The next one would probably be something about geographical proximity and that kind of thing; in that case, you have to create some advanced CoreDNS plugins to modify the responses on the fly according to the situation.
E: Currently the controller does it in a composition way, by populating the DNSEndpoint CRD; for a very dynamic geographical proximity, a closest-location strategy, we would need to modify it at the CoreDNS level.
E: What else is worth mentioning... yeah, as we mentioned, on Tuesday the CNCF TOC voted for k8gb to be accepted as a sandbox project (you can see the ovations here), so we are super happy about it. That's pretty much it. Do you guys have any questions?
B: One question; thanks for the demo and the information. In terms of load balancing: you mentioned the failover and reliability aspects. What are some of the other aspects of incoming traffic that you can load balance on?
E: Well, we operate on only two factors: the underlying healthiness of the target workload, and the load balancing logic. That's it. We do not imply any kind of end-to-end health checks; it is just the readiness and liveness probes, and they can be as sophisticated as the application team wants them to be. That's the cool idea: to give the application team the power and control over the global load balancing for their applications.
B: Yeah, got it, okay. And how does something like this coordinate with things like API gateways or service meshes? I'm relatively new in this area, so I'm just curious how this coordinates with those deployments.
E: Yeah, so far we didn't integrate with any form of service mesh, but the strategy we currently employ actually relies pretty much on the ingress status. We are ingress-controller-agnostic, and we get the addresses, the set of IPs that, you know, get populated by the associated ingress controller. So, for inbound traffic, it can potentially be some service mesh, assuming it controls ingress and doesn't operate purely with some special CRDs. Currently, the indirect integration point is this: whatever gets into the ingress spec. Maybe it would be better to make it... yeah.
E: Okay, that's the current way it works. I'm not sure; maybe in the future we will extend it to some other CRDs, even if we have some advanced service mesh deployment, but currently we never actually tested or integrated it into a more sophisticated service mesh endpoint environment.
A: And Yuri, when you're asked, how do you classify the project: as a, you know, custom Kubernetes operator, or as a custom ingress controller? I'm assuming it's not.
E: Yeah, an ingress controller should be there; otherwise there will be nothing in the status, and there will be no information to populate the DNS record with.
A: Gotcha.
A: And yeah, you're right. I think, as you mentioned, some roadmap strategies with respect to geo-proximity, geolocation, and some advanced calculations would probably require deep integration into CoreDNS. I think that was what I was having a really hard time framing a question around earlier: those types of strategies. So that makes sense. It's elegant in terms of how you're relying on... I guess, as you say, you stipulate it in your goals. I don't know at what point it is, but, dude, more or less, you know, it's pretty Kubernetes-native; I mean, well, sort of the answer is whatever the readiness probes and the liveness probes...
A: ...sort of say, you know. And yeah, done only through an operator. So, is it almost all Go, or is any material part of the project anything but Go?
E: It's really Go. The only non-Go code is, like, our pretty huge Makefile, but it doesn't count.
E: Yeah, for sure. The Helm chart is actually a pretty important part of the project, because it's not just installation; it also has important configuration points which affect the load balancing operation further. So we're taking the cluster with the initial Helm installation and reinstalling.
E: This is the configuration for eu-west-1: we specify the cluster geo tag, and we specify the neighbor, the other Gslb-enabled cluster that it's going to work with. Then, through convention and configuration, they start to talk and share information, also over DNS. The configuration for Africa is similar, just kind of flipped: its own cluster geo tag, and the other cluster to talk to is the EU one. So, I already showed you this DNS, and it's actually on the screen.
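The per-cluster Helm values Yuri is showing would look something like this sketch (key names follow the public k8gb chart of that era; the zone names and tags are illustrative):

```yaml
# Values for the EU cluster; the African cluster flips the two geo tags.
k8gb:
  dnsZone: "test.k8gb.io"        # zone the clusters serve and share state in
  edgeDNSZone: "k8gb.io"         # parent zone where delegation is created
  clusterGeoTag: "eu-west-1"     # this cluster's own tag
  extGslbClustersGeoTags: "af-south-1"  # comma-separated list of neighbors
  reconcileRequeueSeconds: 30    # reconciliation cadence behind the ~30s TTL
```

Because each cluster only needs its own tag plus the list of neighbors, the same chart installs symmetrically on every participating cluster.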
E: Yeah, we populate a special FQDN, a kind of service one, which is not exposed to the user but is just around, so the clusters query each other for this special service FQDN and basically ask about the health status of the associated workload under the other cluster's control.
E: So they just ask each other continuously, every reconciliation loop. For example, in the case of the round robin strategy, each of the clusters will return all of the IP addresses from both Gslb-enabled clusters, and whenever the workload dies in a cluster, whenever the workload is dead in Africa, for example, the European cluster will learn about this fact...
E: ...through this special FQDN. And assuming it is totally degraded, meaning no targets, or partially degraded, meaning one or two instead of, in this specific example, the full three, it will modify the final response accordingly.
A: And then the geo tags, the strings that you're using there: they don't have any special convention today, no?
E
No,
no,
it
can
be
anything
in
this
example.
We
just
named
it
like,
as
it
was
regions,
but
you
can
name
it
whatever.
You
like.
F: Two short questions; they might be inappropriate ones. First one: is it only two, or can it be multiple ones?
E: Yeah, that's a great question. By design we are not limiting the number of clusters to operate; here we have a comma-separated list, right, and round robin already works out of the box. But the failover strategy is honestly not really ready for that kind of work: which cluster is the secondary would not be obvious. We actually have an issue in our GitHub to test k8gb in more-than-two-cluster deployments, to make our operation more ready for that scenario.
E: Yeah, it's our stuff; it's our own custom resource definition. The k8gb controller reacts to this custom resource's presence in the clusters and creates the associated DNS endpoints and ingresses, and the overall automation, according to the spec.
E: Thank you. Cool, closing thoughts. Maybe, while we're on the spec, it may be worth mentioning that we operate a single CRD, which is a pretty convenient way to steer the traffic; but during adoption in Absa we realized that even one additional CRD may be a little bit of overhead for teams, given that they already have established Helm charts, and a new type to drop in might be an overhead. Plus, we operate at a, like, pretty reasonable scale...
E
We
have
more
than
one
120
clusters
so
propagating
airbag
rulers
there
to
enabling
a
new
client,
a
new
apn
point
for
every
team,
also
a
little
bit
burdened
for
operations
team
as
well.
E: So, assuming your workload already has a standard ingress, and it most probably is the case, you can create annotations on the already existing ingress, extending your pre-existing ingress with the strategy there (in the case of failover, also the primary geo tag). The k8gb controller will react to it: it will create the Gslb resource automatically out of the annotations and just link the existing ingress with this Gslb CR, and it will close the whole loop.
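That annotation-driven path looks roughly like this (the annotation keys are the ones k8gb documents; the ingress itself is a made-up example):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: podinfo
  annotations:
    k8gb.io/strategy: "failover"         # tells the controller to generate a Gslb
    k8gb.io/primary-geotag: "eu-west-1"  # required by the failover strategy
spec:
  rules:
    - host: podinfo.test.k8gb.io         # illustrative GSLB host
      http:
        paths:
          - backend:
              serviceName: podinfo
              servicePort: http
```

No new resource type has to be shipped in the team's Helm chart; the controller derives the Gslb CR from the annotations and links it back to this ingress.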
E: So that's another way to enable global load balancing for a workload, and it helped us with internal adoption.
A: Thank you for this, Yuri; it's nice to dig in. Kudos on the project being adopted. I think Mr. Farrell, Daniel, is gonna follow closely in your footsteps with Submariner.
A: Thanks a bunch, and thanks all for coming; we're out of time. Catch you in a couple of weeks.