From YouTube: Service APIs Meeting (APAC Friendly Time) 20200521
A: All right, this is the Service APIs meeting for May 21, and we've got a lot to talk about today. As always, anyone can feel free to add things to the agenda. I tried to scope out some things that I thought were worth covering in more detail, but feel free to add things to the agenda so we make sure we cover what's most important.
B: I think the things that are in here are pretty final. There are a couple remaining changes that I need to push up to fill this out a little bit more: some of the bookkeeping like graduation criteria and how we're going to test it, things like that.
B: The decisions about kube-proxy from the last SIG-Network meeting, for example. But I think the stuff that's here is pretty representative of what's going to happen. There are a few more things that need to get spelled out, like how we're going to handle headless services; that's not really defined yet, I think it's still marked as to-decide in here, but I think we came to some good ideas there. So I've got a couple PRs to go up, but I think this is a good starting point. Okay.
A: So in this case, traffic splitting is in the context of splitting traffic between different Services or other similar resources, and my theory is that this can apply equally well to multi-cluster resources, whether that's ServiceImports or ServiceExports or whatever it might be. Configuring the proportion of traffic that goes to each cluster, as part of a multi-cluster service, can be solved with the multi-cluster API. That is the key thing here: in my mind, as part of this proposal, multi-cluster traffic splitting has two layers.
A: There is a multi-cluster configuration of how any service distributes traffic between different clusters, and then there's route traffic splitting on top of that, which is more app focused. That would indicate application versions, or different forms of configuration, or canary versus beta, those kinds of differences. And one thing that has to be explored any time you look at traffic splitting is: what is your lowest level? What is the lowest unit, the atomic unit, at the bottom of your traffic splitting design? I looked at and explored the possibility of adding some kind of layer below Service. That might include a new field on Service that would allow you to say: this Service should have EndpointSlices that are split on unique values of this version label, as an idea. So then you could target EndpointSlices for the same Service that had a version 1 or a version 2, etc.
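A minimal sketch of that idea in Go. The subsetOnLabel field is purely hypothetical, used here only to make the shape of the idea concrete; nothing like it was actually added to Service:

```go
// Hypothetical sketch: a field on Service asking the endpoints controller to
// produce separate EndpointSlices per unique value of a label, so routes can
// target one subset (e.g. version=v1) of a single Service.
package main

import "fmt"

// ServiceSpecSketch holds only the parts relevant to this idea.
type ServiceSpecSketch struct {
	Selector      map[string]string
	SubsetOnLabel string // hypothetical field; not part of any real API
}

func main() {
	spec := ServiceSpecSketch{
		Selector:      map[string]string{"app": "store"},
		SubsetOnLabel: "version", // one set of EndpointSlices per "version" value
	}
	fmt.Printf("%+v\n", spec)
}
```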
A: Endpoints themselves would not really have a great path to support subsetting, so this would be dependent on EndpointSlices, and adding any kind of subsetting API to Services without any corresponding traffic splitting implementation might feel out of place. If you add some component of traffic splitting to Services but don't have any way to do anything with it, it may feel a little bit strange. And it would be nice if traffic splitting could support Layer 7 concepts like header-based splitting, which is just never going to be compatible at the Service level; defining traffic splitting in both Route and Service, I think, would also be confusing. So I thought that it would make sense to avoid going down this rabbit hole and to keep Service as the smallest resource.
C: I have a question or comment from the Istio point of view. In reality, most people split traffic between a canary and production: something like a canary deployed in two clusters and production in the other three clusters. The question is a bit whether this can express that kind of thing, because I want 5% to go to the canary, regardless of which cluster it is in, and 95% to go to production, yeah. This seems to be more oriented toward splitting between clusters, not splitting between subsets, where subsets span multiple clusters.
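For concreteness, a sketch of the split being asked for, in Go. The names and types here are illustrative only, loosely modeled on the weighted forwardTo idea discussed later, not an actual API:

```go
// The user intent: 95% of traffic to prod, 5% to the canary, where each name
// is a multi-cluster service whose backends may live in any cluster.
package main

import "fmt"

type Target struct {
	Kind   string // e.g. "ServiceImport", so backends can span clusters
	Name   string
	Weight int
}

func main() {
	targets := []Target{
		{Kind: "ServiceImport", Name: "store-prod", Weight: 95},
		{Kind: "ServiceImport", Name: "store-canary", Weight: 5},
	}
	for _, t := range targets {
		fmt.Printf("%d%% -> %s/%s\n", t.Weight, t.Kind, t.Name)
	}
}
```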
D: So I think it might be worthwhile to go into this, because some people might not be familiar with the overall way the multi-cluster stuff is proposed to be organized, which is basically that this ServiceImport, and the name of that service, is going to be a cross-cluster concept: that "app1 canary" refers to, actually, several Services that are distributed across different clusters. Jeremy, maybe just a couple sentences to give some context? Good, yeah.
B: Yes, so a little bit of background there. Basically, a ServiceImport is a multi-cluster abstraction over Services that exist in other clusters. The basic API is: you create a ServiceExport to say that you want a Service to be visible to the clusters in your group, and that creates a ServiceImport in each importing cluster that represents that Service. The backends might be in any number of other clusters, or they might be in the same cluster you're in but accessed through the ServiceImport.
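A minimal sketch of that relationship in Go, simplifying the multi-cluster services proposal to just the parts described above (these are not the exact proposed types):

```go
// A ServiceExport marks an existing Service as visible to the cluster group;
// the implementation then creates a ServiceImport in each importing cluster
// to represent it, with backends potentially in any cluster.
package main

import "fmt"

// ServiceExport is created by the user alongside the Service being exported.
type ServiceExport struct {
	Namespace, Name string // must match the exported Service
}

// ServiceImport is created by the implementation in importing clusters.
type ServiceImport struct {
	Namespace, Name string
	Clusters        []string // clusters that currently back this import
}

func main() {
	exp := ServiceExport{Namespace: "default", Name: "store"}
	imp := ServiceImport{
		Namespace: "default", Name: "store",
		Clusters:  []string{"us-west", "us-east"}, // may include the local cluster
	}
	fmt.Printf("export %+v -> import %+v\n", exp, imp)
}
```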
E: Let's say the canary was in this cluster. The way that we've designed it, we could say: oh, you know, I only need to use a ServiceImport for prod, because that is in some other cluster, and the Service for "app1 canary" could be just a standard Service resource instead of a ServiceImport resource, if the canary was running locally in this cluster, right? Yeah.
A: All right. Well, I'll just back up a second; we've generally already covered this, but in any multi-cluster model as I understand it, there will be a config cluster that contains configuration that can be applied globally. In this proposal, I'm suggesting that Gateways and Routes could live in this cluster, and then they would reference ServiceImports or other such things that exist in each individual cluster. There's no hard and fast rule.
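As a sketch of that arrangement in Go (illustrative only; the group name follows the multi-cluster services proposal, but the route fields are not final):

```go
// A route living in the config cluster refers to a ServiceImport by
// group/kind/name; each individual cluster resolves that import locally.
package main

import "fmt"

type ForwardTo struct {
	Group string // empty would mean core Services
	Kind  string // "ServiceImport" for the multi-cluster case
	Name  string
}

type RouteSketch struct {
	Hostname  string
	ForwardTo []ForwardTo
}

func main() {
	route := RouteSketch{
		Hostname: "store.example.com",
		ForwardTo: []ForwardTo{{
			Group: "multicluster.x-k8s.io",
			Kind:  "ServiceImport",
			Name:  "store",
		}},
	}
	fmt.Printf("route in config cluster: %+v\n", route)
}
```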
B: I think that does make sense. From the multi-cluster services definition standpoint, we've been trying really hard not to define anything like that: there doesn't have to be, you know, a config cluster or a centralized model, and an implementation could be free to be decentralized. But this absolutely would work; this fits the model. I think we just haven't made the assumption that there is, in fact, a config cluster. Okay.
D: I think the thing to highlight is not that there's a single config cluster; it's that right now, when we're designing the APIs, we do not expect to solve the problem of, for example, having people define things in multiple places and somehow merging them automatically. At least right now, that seems like a super hard problem.
D: I think the note here is that, because we have a TCPRoute, we can have a notion that it's going to be per-connection, or, you know, define it more specifically. And the other thing to note is that right now we have, provisionally, this weight, which we haven't defined very carefully. That's going to be a further discussion, as to what exactly that means, but for illustration purposes: there's some kind of configuration that's going to tell the system how much traffic is sent to each place.
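A small Go sketch of that provisional shape: weighted targets on a TCP route, where the split is necessarily per-connection. The weight semantics are explicitly unsettled here, so this is for illustration only:

```go
// Provisional weighted forwardTo on a TCP route: each new connection is
// assigned to a target in proportion to its weight.
package main

import "fmt"

type WeightedTarget struct {
	Name   string
	Weight int // provisional; exact meaning still to be defined
}

func main() {
	targets := []WeightedTarget{
		{Name: "store-v1", Weight: 90},
		{Name: "store-v2", Weight: 10},
	}
	total := 0
	for _, t := range targets {
		total += t.Weight
	}
	for _, t := range targets {
		fmt.Printf("%s gets ~%d%% of new connections\n", t.Name, 100*t.Weight/total)
	}
}
```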
C: If I can make a request: it will always be useful to me, when you speak about TCP routes, to include TLS, because very few people are using plain TCP. So examples with TLS would be very useful, and then, since you have a hostname, it implies that it's also doing SNI routing, right, in TLS. So it's wonderful, yeah.
A: Yeah. And the other key thing here is that this is really, I hate to say it, a shot in the dark; it's really tentative. This is one idea of how this multi-cluster traffic distribution could work, and keep in mind, with this whole doc, I'm really saying that it's not really in the scope of Service APIs to define this; I just wanted to show potential ways these two levels could work together. So, as an example, this specific shape is likely not going to happen.
A: The way I have it specced out right now, you could have something like a max-rate annotation or field, on either Service or ServiceImport, that further specifies the amount of traffic that should go to a specific cluster. In this case it would be the maximum number of requests; a maximum number of connections is another common way to do this. The most difficult thing here is finding a combination of fields or values we can base this off of that enough implementations could actually support, and max rate seemed like a relatively well supported one, but it doesn't give us everything we might want. One of the key things we want to be able to do is still have a service represented in a cluster, with its pods and everything else, but turn down all traffic to it, or gradually phase traffic out of, or into, a specific cluster, with something like max rate.
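A rough Go sketch of the tentative max-rate idea. Neither the field name nor its placement (Service, ServiceImport, or ServiceExport) was decided, so everything here is illustrative:

```go
// A per-cluster cap on the traffic a backend cluster should receive.
// Setting the cap to zero keeps the service deployed but drains traffic,
// which supports gradually phasing a cluster in or out.
package main

import "fmt"

type ClusterBackend struct {
	Cluster string
	MaxRate int // max requests/sec for this cluster; 0 = fully turned down
}

func main() {
	backends := []ClusterBackend{
		{Cluster: "us-west", MaxRate: 1000},
		{Cluster: "us-east", MaxRate: 0}, // still deployed, traffic turned down
	}
	for _, b := range backends {
		fmt.Printf("cluster %s capped at %d req/s\n", b.Cluster, b.MaxRate)
	}
}
```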
B: Maybe this is more of a multi-cluster service thing. I think this needs to evolve with Service APIs, but maybe the right place for this, because it doesn't really make sense in a single-cluster context, would be the ServiceExport, which is the resource that you define to denote that a service should be public. That seems like a reasonable level for this kind of property: it only has meaning in multi-cluster, but it also only reflects state of the local cluster.
C: I want to say it actually does make a lot of sense in a single cluster, especially if it's defined as max rate per endpoint, because of, you know, circuit breakers, and knowing how much load to allocate when load balancing; it's very important there. So it would be super useful to have it in a way that maps to our API as well. Right.
A: Well, and always feel free to, or please, comment on this doc with other ideas. I see, Costin, you had a comment here; that's very helpful. Yeah, I think max rate per endpoint would be useful, but again, like Jeremy alluded to, it's hard to add attributes to Service that kube-proxy can't support; but maybe we have to start somewhere, yeah, I'm not sure.
E: Now we'd get that forwardTo target, and the PR that's out there is basically saying: okay, we can apply weights to each individual target, with a target, let's say, being a Service. And so my understanding with using weights was that, if we said, okay, for this target I want to, quote unquote, take it out of service, let me change the weight for this target to zero, so we stop forwarding requests to that Service.
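A small Go sketch of that "weight zero" usage, reusing the same illustrative weighted-target shape from the TCP example above:

```go
// Setting a target's weight to zero stops new traffic to it without
// deleting the target, i.e. takes it "out of service".
package main

import "fmt"

type WeightedTarget struct {
	Name   string
	Weight int
}

func main() {
	targets := []WeightedTarget{
		{Name: "store-v1", Weight: 100},
		{Name: "store-v2", Weight: 0}, // quote unquote out of service
	}
	for _, t := range targets {
		if t.Weight == 0 {
			fmt.Printf("%s: no new requests forwarded\n", t.Name)
			continue
		}
		fmt.Printf("%s: receives traffic (weight %d)\n", t.Name, t.Weight)
	}
}
```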
A: Yeah, this definitely gets tricky without the multi-cluster component added in. The way I've been thinking about this is that max rate would be a multi-cluster concept which, as it turns out, could apply beyond multi-cluster, but it would be informative when you target a ServiceImport, you know, kind of a global service: if max rate is already accounted for in a specific cluster, then you waterfall over to another cluster that still has capacity.
D: And yeah, this is a very complicated thing, because, I think, the example to put up there that is interesting is: if you have canary and non-canary, and the canary completely dies, and you're sending five percent there, probably you don't want it to buckle, question mark. So that's one example that we're sort of playing with, to see if we can do it right.
A: Yeah. And then, just going back to a very simple example without any traffic splitting: this is just an example of how you would target a multi-cluster service directly, without any form of weighting or additional traffic splitting. Really simple, really straightforward; it looks very similar, just without any traffic splitting. And finally, there was an idea of a full canary cluster that I thought was worth exploring here. The idea is that some organizations choose to replace entire clusters instead of doing in-place upgrades, as an example.
A: So you'd have, you know, a canary cluster, or set of clusters, receiving a portion of traffic, and then gradually shifting that traffic over to your new cluster. So maybe "canary cluster" isn't the appropriate term, maybe blue-green; whatever it is, there are a few different models that could work here. In the full trust model, you deploy the same service to the new cluster and immediately opt in; that seems risky, but it is an option, to spin up a new cluster. A medium trust model would involve deploying the same service to a new cluster with a max rate of zero, or whatever that field is, and slowly increasing that to full capacity. And the low trust model would be to treat the new cluster as new services, new service version, new everything, and follow Service APIs traffic splitting as defined, with weight.
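A sketch of the medium trust model in Go. The ramp values and the maxRate mechanism itself are assumptions carried over from the earlier sketch, purely for illustration:

```go
// Medium trust cluster replacement: the new cluster's service starts at a
// max rate of zero and is ramped up in steps toward full capacity.
package main

import "fmt"

func main() {
	// Illustrative ramp schedule for the replacement cluster (requests/sec).
	for _, rate := range []int{0, 50, 500, 5000} {
		fmt.Printf("set new cluster maxRate=%d, observe, then continue\n", rate)
	}
	fmt.Println("new cluster at full capacity; retire the old one")
}
```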
G: Yeah, so there are kind of two modes. One, we allow people to say 20 percent of traffic goes to this region and 80 percent goes to that region, and I say region because this is actually based on topology, not clusters; that will probably die later, but for now that's how it works. But I don't really know if anyone uses this, so it's not necessarily an example of what to do. What most people are using is our cross-locality stuff; it's more about failover, and we don't have, like... yeah, we don't.
A: So this is also a place where you can comment on and discuss this doc and this approach. And then James, I don't think he's here yet, and he may not make it, so I'll leave this. I know we've discussed this recently, but he has a really interesting PR that moves listeners around a bit, and notably moves routes to be per-listener instead of per-gateway.
G: Yeah, so I think there are two parts. One is that we have this route selector that doesn't have any type, right? It's just selecting labels, and since we can have all sorts of different routes, like, you know, HTTPRoute, TCPRoute, maybe vendor-specific ones, that's kind of confusing, especially if we talk about portability.
G: If you assume that you have your label selector foo equals bar, and it's selecting HTTPRoutes and TCPRoutes, and then say you switch to Istio, as an example, and you have some VirtualService with those labels: all of a sudden those are going to start being used, when you really have no idea. So you get this weird behavior, and I don't think there's anything else that's doing label selection across multiple different types in Kubernetes.
A: I think that, as I recall, the consensus was that we should do this, and that we should start with support for a single kind. And if there really is a use case for multiple kinds, we could eventually transition to a list if it was necessary, but I think in most cases we will be fine with targeting a single kind.
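A Go sketch of that direction: each listener selects routes of exactly one kind. The field names are illustrative of the PR's general shape, not a final API:

```go
// A listener's route binding names one route kind plus a label selector,
// avoiding the surprise of an untyped selector matching HTTPRoutes,
// TCPRoutes, and vendor-specific routes at once.
package main

import "fmt"

type RouteBindingSketch struct {
	Kind     string            // exactly one route kind per listener
	Selector map[string]string // labels that routes of that kind must carry
}

func main() {
	binding := RouteBindingSketch{
		Kind:     "HTTPRoute",
		Selector: map[string]string{"gateway": "external"},
	}
	// A TCPRoute or VirtualService carrying the same labels would not match.
	fmt.Printf("listener selects %s objects matching %v\n", binding.Kind, binding.Selector)
}
```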
G: If you don't have this enforcement... because, you know, you can add some label name equals foo while you're in the foo namespace, but someone else can add that label to their namespace as well. So, I mean, it makes sense in some cases, but at the very least it would make sense to have something like selecting by name, or something that's actually unique. I saw you had that PR for adding, like, only local routes, or some similar field, which makes sense; I don't know if we need more flexibility or not.
A: There may have been other things that ended up deciding this, but I think a key one was that NetworkPolicy already relies heavily on namespace labeling, and so this would not be the only thing where your security would be compromised if you allow just anyone to label namespaces. So there is precedent there.
A: So the idea was to go ahead and add an "only local routes" boolean to replace the nil behavior, and make it mutually exclusive with the allowed route namespaces selector. This partially covers it, though it is still very much a route selector, a selector for the allowed route namespaces; the only thing it helps with is the use case of the local namespace, which I think is a very significant one. But I welcome additional feedback. Yeah.
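A Go sketch of the proposal just described. Field names are illustrative of the PR under discussion, not a final API:

```go
// An "only local routes" boolean replaces the nil behavior and is mutually
// exclusive with the allowed-route-namespaces label selector.
package main

import "fmt"

type RouteNamespacesSketch struct {
	OnlyLocal         bool              // if true, only the Gateway's own namespace
	NamespaceSelector map[string]string // otherwise, namespaces matching these labels
}

func allowed(r RouteNamespacesSketch) string {
	if r.OnlyLocal {
		return "routes from the local namespace only"
	}
	return fmt.Sprintf("routes from namespaces matching %v", r.NamespaceSelector)
}

func main() {
	fmt.Println(allowed(RouteNamespacesSketch{OnlyLocal: true}))
	fmt.Println(allowed(RouteNamespacesSketch{
		NamespaceSelector: map[string]string{"team": "store"},
	}))
}
```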
D: I think this might be, I don't know which SIG's problem it is, maybe. I just wonder who would own the namespace permissions recommendations problem, because we can come up with something here, and clearly NetworkPolicy was doing something, but it sounds like Costin, and maybe other people, are doing other things. It would be good; I'm pretty sure there are other resources, not just networking related, that have this need. Like, Rob, should we sort of poke around to see which? Yeah.
A: That is a good question. It does seem like a question that would be good for, say, SIG Auth. You know, when we initially came up with our security model, we ran it by Jordan, and he had good feedback, and I remember at the time it all seemed reasonable, but it's probably worth running this by a broader SIG audience again, and specifically about namespace selectors, because I agree that, you know, a namespace list accomplishes a similar thing.
G: The same thing as well with the namespace list. If you are the controller for some gateway class that has access to, you know, namespaces A and B, then your controller actually only needs access to those namespaces. But otherwise, I guess, listing namespaces isn't that high a privilege, maybe, so it's not a big deal. Yeah.
C: It's more efficient in many ways, because you can have low-privilege gateways that, you know, may only have permissions to watch stuff in specific namespaces, so you get tenant or more isolation kinds of benefits: do not watch everything in all namespaces. I wanted to add that one trick we typically use in Istio is to define some magic labels, both on workloads and a lot of other things. So basically you could define a reserved label, something like an underscore-namespace label or whatever is reserved, and it will match.
G: Actually, that's a good point: can you, with RBAC today, say that you can list... like, if you say that you can only... I guess, never mind. I was thinking that you'd want to be able to do it with RBAC permissions, but I guess you list the namespaces, then you find the labels that are permitted, and then you should have RBAC permission in those namespaces, and then you list all the routes in those namespaces. So I guess something like that, yes, yeah.
A: Okay, well, I can take an action item out of this to follow up with, say, SIG Auth. I'm not sure; I know they meet on Thursdays, but I'm not sure if it's every other week, and I don't know which week it is, but I may just send something out to their mailing list. But I will get back to everyone on this as far as what is recommended. And also, it sounds like there are significant opinions here; I think, Costin and John, you both commented.