From YouTube: Kubernetes Community Meeting 20180607
This is our public weekly meeting, for more information, check it out here: https://github.com/kubernetes/community/blob/master/events/community-meeting.md
A
All right, welcome everybody to the weekly Kubernetes community meeting. Today is Thursday, June 7th, 2018, and I am your host and moderator. My name is Jaice DuMars; I am co-lead of SIG Release and SIG Architecture, and I work at Google. So without further ado, I'd like to take you through the agenda, which is available after the fact at bit.ly/k8scommunity. We're going to start off with a demo of a piece of software called YugaByte DB, and Karthik Ranganathan is going to do that, so I'll turn this over to him. Take it away.
B
Thank you. Hey everybody, I'm Karthik, and I'm going to share my screen. I'll jump right in, because ten minutes is barely any time and it goes by real quick. OK, YugaByte DB is a database, and when we built it our idea was to bring together transactional, high-performance, and planet-scale attributes into a single database. I'm going to talk a little bit more about what that means at a high level. Think about all the databases that people use as a system of record for online applications.
B
Traditionally,
people
chose
an
SQL
database
like
including
like
a
cloud
native
offering
like
Amazon
Aurora,
but
that
gives
up
on
the
planet-scale
aspect
because
it
doesn't
really
scale
out
or
it's
difficult
to
have
it
geographically,
replicated
and
present
in
different
geographies.
It
was
then
soon
augmented
with
no
sequel
databases.
No
sequel
databases
give
up
on
the
transactional
aspect.
They
don't
typically
give
you
like
a
consistent
transactions
across
multiple
keys
or
a
secondary
index,
but
they
do
offer
high
performance
and
planet-scale.
B
It
does
low
latency
and
tunable
breech,
so
it
can
it's
capable
of
performing
for
with
when
you
don't
have
conflicts
across
different
rows.
It
can
give
you
very,
very
low
latencies,
like
in
the
hundreds
of
microseconds
when
doing
key
value
reads
and
also
offers
things
such
as
read
from
your
nearest
data
center
or
a
follower,
and
can
do
good
streaming
reads
and
writes
for
high
throughput
operations
and
finally,
on
the
planet-scale
side.
It
internally
knows
how
to
replicate
and
shard
data
and
it
can
do
global
data
distribution.
B
So
it
has
synchronous
and
asynchronous
replication
capabilities
on
the
cloud
native
side.
It's
built
for
the
container
era,
so
it
works
natively
on
containers
and
it's
a
self-healing
and
fault
tolerance,
so
it
can
deal
with
failures
and
audibly
replicate
data.
And
finally,
it's
it's
kind
of
doubly
open
source
like
it's
open
sourced,
under
Apache
2.0
or
in
github,
but
it
also
implements
like
in
order
to
interact
with
the
database.
You
use
popular
known,
api's
like
which
are
open
source
databases
themselves.
B
The YB-Master is not involved in serving the I/O from the application; it does the background coordination and cluster-wide management, such as figuring out where the various shards are, coordinating operations such as creating a table or performing a backup, and so on, or changing the schema across the entire fleet of machines. The YB-TServer is what actually stores and serves data. Every table is split up into small chunks called tablets.
B
These
tablets
are
replicated
and
they
are
hosted
by
the
ybt
servers
which
actually
do
the
I/o
with
the
critical
path.
Okay,
each
of
the
T
servers
use
a
what
what
we
called
arc
DB
store.
So
it's
a
document
based
engine,
it's
a
heavily
extended
version
of
rocks
DB
so
and
it
uses
a
raft
for
replication
across
nodes.
So
those
two
together
pretty
much
give
you
a
key
to
document,
replicated
storage
engine
and
there's
a
global
transaction
manager
on
top
that
tracks
atomicity
of
operations
across
keys.
B
Now,
if
you
think
about
how
yoga
byte
is
started
like
without
stateful
sets,
you
just
have
to
pretty
much
tell
the
Masters
about
each
other
and
start
them
up,
so
that
they
can
form
a
raft
group.
You
give
it
a
piece
of
disk
to
write
data
on,
and
you
tell
the
various
key
servers.
The
tablet
servers
coming
up
about
the
location
of
the
masters.
B
Now,
if
you
think
about
what
stateful
says,
gives
us
it's
pretty
much
the
same
thing
that
you
would
need
like
it,
so
it's
like
an
ideal
fit
for
a
stateful
set
style
application.
It
needs
order
operations
because
you
want
to
add
these
servers
and
remove
T
servers.
So
the
ordinal
numbers
are
really
useful.
B
It
needs
a
stable
network
ID,
so
you
can
discover
the
members
of
that
cluster
of
a
service
and
finally,
it
needs
a
persistent
volume
so
that
even
on
failures,
you
don't
even
want
simple
failures
like
a
pod
failure
or
a
process
failure,
you
don't
have
to
rebuild
all
the
data.
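As a minimal illustration of why those StatefulSet properties matter (a hypothetical sketch, not the actual YugaByte charts; the StatefulSet name "yb-master", the headless service "yb-masters", the namespace, and the port are made up), the stable ordinal pod names plus the headless service's per-pod DNS entries mean the list of master addresses can be derived mechanically, with no external discovery:

```python
# Hypothetical sketch: a StatefulSet "yb-master" with 3 replicas behind a
# headless service "yb-masters" in the "default" namespace.
REPLICAS = 3
MASTER_RPC_PORT = 7100  # illustrative port number

# Each StatefulSet pod gets a stable name <statefulset>-<ordinal> and a stable
# DNS entry <pod>.<headless-service>.<namespace>.svc.cluster.local, so the
# peer list survives pod restarts and reschedules.
master_addresses = ",".join(
    f"yb-master-{i}.yb-masters.default.svc.cluster.local:{MASTER_RPC_PORT}"
    for i in range(REPLICAS)
)

print(master_addresses)  # e.g. passed to masters and tablet servers at startup
```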
So we ended up building this on top of StatefulSets, which works really well for us, and I'm going to show a real-world example of the whole thing deployed on top of Kubernetes. Think of an e-commerce app.
B
Ak
ecommerce
AK,
it's
called
yoga
stone
and
the
code
for
this
is
in
github,
like
you
could
find
it
at
the
yoga
store
like
it
hub
location.
It's
built
using
gigabyte
as
the
database
Express
and
react
combination,
Express
a
node
combination
for
doing
the
rest,
API
service
and
react
as
the
UI
and
the
entire
thing
deployed
on
top
of
kubernetes
okay.
So
this
is
the
high-level
architecture
of
how
the
gigabyte
service
looks.
B
The
yoga
buy
database
itself
looks
this:
firstly,
a
master
service
which
has
a
bunch
of
parts
exposed
using
a
headless
service
because
it's
a
stateful
set
and
there
is
a
load
balancer
service
for
the
master
dashboard,
so
that
you
can
actually
view
the
UI
of
what's
happening
in
the
cluster.
There's
a
tablet
server
service
of
the
ybt
server
cell
service,
which
is
another
stateful
set,
and
it
has
a
number
of
parts
within
it
which
are
exposed
using
a
headless
service.
B
The
tablet
servers
discover
the
master
using
the
the
service
name
for
the
master
and
an
admin
as
an
admin.
You
would
be
able
to
look
at
how
the
master
looks
and
I'm
gonna
like
quickly
switch
into
and
show
you
how
this
looks
when
deployed.
This
is
the
view
of
the
master
service
like
we
are
looking
as
the
admin
user
into
the
YB
master,
UI
load,
balancer
service,
and-
and
this
is
how
it
looks.
B
They host a number of tablets, and each of them is part of a Raft group, so they're able to replicate data with consistency. Currently there are no reads or writes going on in the cluster, so that's the setup right now. Now, when we introduce our app, the app discovers the database through the tablet servers, because they do all the I/O, and it uses the tablet server service.
B
It
has
a
number
of
books,
it
has
some
static
categories
like
business
books,
cook
so
on
and
so
forth,
and
it
has
some
dynamic
categories
like
the
highest-rated
books,
which
can
change
quite
frequently
as
people
continue
to
rate
it,
and
you
can
go
in
on
any
one
book
and
you'll
be
able
to
see
the
details
of
that
book.
Now.
If
we
look
at
what
so,
why
you
go
by
makes
this
development
that
much
easier
you
goodbye.
B
It
is
a
multi
API
database,
so
there
you
can
use
multiple
API
is
to
interact
with
the
database,
so
the
highly
dynamic
stuff,
like
the
reviews
or
the
number
of
stars
aggregate
rating.
You
can
store
that
in
Redis
and
we
call
that
the
radius,
the
yadus
API.
You
know
trademark
so
on
and
for
the
more
static
stuff
like
a
book
title
or
a
book
description,
you
can
store
that
in
the
Cassandra
API,
because
that's
more
natural
and
it's
pretty
easy
to
connect
to
the
database
in
order
to
extract
data
from
the
respective
API.
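To make the multi-API idea concrete, here is a hedged Python sketch (not the actual YugaStore code; the host, keyspace, table, and key names are invented for illustration) showing writes through the two wire-compatible APIs, using the open-source cassandra-driver and redis client libraries:

```python
from cassandra.cluster import Cluster  # pip install cassandra-driver
import redis                           # pip install redis (v3+ signatures)

# Static catalog data goes through the Cassandra-compatible (YCQL) API.
cql = Cluster(["yb-tserver-0.yb-tservers"]).connect("store")
cql.execute(
    "INSERT INTO products (id, title, category, type) VALUES (%s, %s, %s, %s)",
    (1, "Example Book", "business", "hardcover"),
)

# Highly dynamic data (review counts, aggregate stars) goes through the
# Redis-compatible (YEDIS) API served by the same tablet servers.
r = redis.Redis(host="yb-tserver-0.yb-tservers", port=6379)
r.zincrby("num_reviews", 1, "product:1")   # one more review for this product
r.zincrby("total_stars", 5, "product:1")   # add a 5-star rating to the total
```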
B
So
you
behave.
It's
like
dealing
with
multiple
tables,
but
different
tables
have
different
API
I'm
gonna,
like
the
way
we
deployed
the
database
was
simply
start
up
like
a
stateful
set
and
then
deploy
the
yoga
store,
which
just
brings
up
the
deployment
expose
the
app
using
load,
balancer
and
finally,
we're
going
to
run
a
workload
and
take
a
look
at
how
it
actually
runs
some
load
and
all
of
this,
the
demo
that
you're
going
to
see
it
runs
on
gke
Google's
container
engine.
B
So
the
first
thing
is
I'm
going
to
show
the
parts
what
you
find
is.
There
are
three
master
pods
and
three
tablets
server
parts
with
one
of
the
deployment,
the
one
unit,
one
part
which
is
a
deployment
of
the
app
itself.
We
can
very
easily
connect
to
the
Cassandra
side
of
the
house
by
just
connecting
to
one
of
the
tablet.
Server
pods
for
now
and
we'll
be
able
to
like.
B
Firstly,
just
give
me
three
products
right
and
it's
going
to
like
fetch
you
three
products
but
you'll,
be
able
to
do
things
like
give
me
three
products
from
the
business
category
or
give
me
three
products
from
the
business
category
which
have
the
hardcover
type
right
and
if
you
can
go
ahead
and
make
simple
queries
like
that.
These
are
much
much
more
similar
to
your
SQL
type
coils
and,
at
the
same
time,
Simon
you'll
be
able
to
connect
to
the
Redis
client
in
the
reddest
client.
You'll.
B
Give
me
all
products
sort
them
in
a
descending
fashion
by
the
number
of
reviews,
and
also
give
me
the
total
review
score
right
or
you'll
be
able
to
do
the
same
thing
for
the
number
of
stars.
So
you
can
very
easily
persist
and
display
data
using
different
api's
and
this
version
of
Redis
actually
stores
and
scales
your
data.
So
you
don't
have
to
worry
about.
You
know
storing
it
in
a
database
and
refilling
it
on
failure
and
so
on
and
so
forth.
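As a hedged sketch of the two read patterns just described (again, the keyspace, table, and key names are hypothetical, not taken from the demo), the same two clients could be used roughly like this:

```python
from cassandra.cluster import Cluster  # pip install cassandra-driver
import redis                           # pip install redis (v3+ signatures)

cql = Cluster(["yb-tserver-0.yb-tservers"]).connect("store")

# "Give me three products from the business category with the hardcover type":
# a familiar SQL-style filter over the Cassandra-compatible API.
rows = cql.execute(
    "SELECT id, title FROM products"
    " WHERE category = %s AND type = %s LIMIT 3 ALLOW FILTERING",
    ("business", "hardcover"),
)
for row in rows:
    print(row.id, row.title)

# "All products, sorted descending by the number of reviews, with the score":
# a sorted-set read over the Redis-compatible API.
r = redis.Redis(host="yb-tserver-0.yb-tservers", port=6379)
for product, num_reviews in r.zrevrange("num_reviews", 0, -1, withscores=True):
    print(product.decode(), int(num_reviews))
```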
A
Hey
correct
and
we're
actually
in
time-
and
we
have
a
few
questions
in
the
chat
that
I
love
for
you
to
answer.
If
that's
ok,
that's
yeah,
please
yeah!
Let's
do
that
yeah
and
thank
you
for
a
great
demo.
This
is
this
has
been
great
and
if
you
can
provide
slides
to
Austin,
so
we
can
put
them
in
the
in
the
meeting
agenda
for
people
to
look
at
after
the
fact
that
SlideShare
is
good.
Yes
Josh.
Can
you
just
ask
your
questions
verbally?
Is
that
okay
yeah.
C
B
We're
working
on
a
coburn,
a
DS
operator,
but
the
reality
is
like
there's
two
parts.
The
regular
maintenance
of
the
database,
like
specifically
for
failures,
is
automated
into
the
database
itself,
so
it
doesn't
need
communities
operator
for
the
simple
operations.
But
that
said,
we
are
working
on
kubernetes
operators
to
figure
out
things
like
changing
the
replication
factor
of
the
cluster
or
like
more
advanced
operations
like
that.
So
so
yes,
we
are
but
like.
We
typically
expect
a
simple
version
to
just
work
as
it
is.
C
Anything else goes on hold. Unfortunately, we were not able to complete the new Prow-based milestone maintainer in time, so we're using the old milestone maintainer, which means that something we wanted to be able to do during code freeze, which is to take branch-only patches for fixing things on the back branches, we cannot do. So, anybody interested in learning Prow: that and the milestone maintainer are two projects that could really use some love.
C
You
know
for
that
matter
again
because
we're
in
code
freeze
and
because
we
are
looking
to
stabilize
everything
now
please
before
you
submit
a
PR
against
111,
ask
yourself:
does
this
really
need
to
go
into
111?
You
know:
is
this
a
critical
bug
fix
for
111,
or
could
it
wait
for
112?
The
code
freeze
will
end
on
June
19th
and
barring
new
unexpected
problems.
C
However, there are a whole bunch of things that went into 1.11 that are small patches, not sufficient to be a listed feature, but that nevertheless have user-facing or developer-facing changes. If you are the author of such a feature, please make sure that you get a docs change in. We don't currently have good tools to track this class of updates, so the release team is not really in a position to make sure that you got your docs changes in.
C
So
it's
up
to
you
at
least
in
111,
the
one
other
thing
I
actually
mentioned.
There
is
kudos
to
SIG's
scalability
and
the
performance
team
were
actually
in
good
shape
for
performance
wise
for
111,
not
failing
any
performance
tests,
not
with
major
performance
worries
on
there.
One
of
the
things
that
contributors
will
see
is
that
we
now
have
a
new
pre
submit
and
this
pre
submit
actually
does
a
small-scale
performance
test
to
look
for
regressions
if
that
is
behaving
in
some
wonky
way.
C
For
you,
please
go
to
SIG's
scalability
to
ask
for
help
with
it
I
and
that's
a
big
step
forward
in
terms
of
making
sure
that
we
maintain
our
expected
level
of
performance,
otherwise,
ca
signals.
Looking
good
the
tests
infra
recently
fixed
gke
breakage,
to
get
the
test
passing
again
are
the
only
thing
that
are
currently
failing
our
upgrade
downgrade
tests,
and
we
know
why
and
are
fixing
it
and
I
just
want
to.
C
Finally, some awesome news: we are, whatever it is, three weeks away from the 1.11 release, and we are looking at forming the 1.12 release team. In the notes there's a link to a PR with the 1.12 release team, with Tim Pepper nominated to be the lead. If you are interested in participating on the release team, please speak up in the comments on that PR, or pop into the SIG Release channel on Slack and express your interest. And that is it for the release.
A
Great
Thank,
You
Josh,
so
much
really
great
information
there
and
let's
go
to
move
on
to
talk
about
caps.
So
in
case
you
don't
know
what
it
cap
is,
it's
a
way
that
we
track
value
or
enhancements
that
are
put
into
the
communities
ecosystem.
So
these
Kearney's
enhancement
proposals
are
a
way
to
just
keep
an
eye
on
how
we
track
this
state
and
status
of
these
proposals
and
how
they
evolve
over
time.
A
So
there's
some
metadata
and
so
essentially
the
things
that
will
become
new
and
communities
in
the
future
are
going
to
start
out
as
caps,
and
so
it's
a
relatively
new
thing
in
the
community
we're
trying
out
in
refining
and
working
on
that
process.
But
we
wanted
to
take
this
moment
to
expose
you
to
that
process
and
also
show
you
some
of
the
cool
things
that
are
happening
and
so
we're.
D
We use a similar process internally for design: we benefit from having standard templates for design docs, and that's kind of what this is. It's a great way to socialize an idea, and it creates a history of designs, so you can go back through all the KEPs that have happened before. We made a KEP about a month ago for a project called kustomize, and we used the template here; there's a template provided for how to write your KEP.
D
You
got
a
incremented
number
you're
going
to
fill
out
various
X
that
our
standard,
so
that
all
the
caps
look
the
same.
It's
very
important
to
specify
your
goals
and
just
as
important
to
specify
your
non
goals.
So
our
PR
is
linked
to
from
the
document
and
it
looks
like
I
can't
get
out
of
because
mine
there
we
go.
This
was
our
PR
got
some
commentary
on
it
settlements
at
around
in
the
under
review
for
a
while
and
what
are
the
important
things
we
was
added
here?
D
It
was
the
list
of
like
many
of
the
features
and
long-standing
bugs
that
our
cap
is
intending
to
address
right.
So
then,
finally,
we
get
a
real
cap,
we
get
it
a
we
get
emerged,
and
here
it
sits,
describes
what
we're
gonna
do
with
this
project
called
customized
and
you
can
go
read
through
it
to
find
out.
You
know
why
we
want
to
do
things
generally.
The
idea
is,
we
want
to
convert
many
of
the
things
that
are
curtain
or
provide
a
declarative
way
to
do
many
things
that
have
historically
been
done.
D
So
we
have
this
thing
committed
and
the
end
result
of
this
or
the
the
current
status
of
this
process
is
you've
got
a
repo
where
customized
sits
that's
here
and
it
sits
underneath
community
SIG's,
the
the.
If
you
go
to
the
readme
for
the
project
down
at
the
bottom,
it
describes
that
the
tool
is
sponsored
by
six
CLI.
So
one
of
the
aspects
of
a
cap
is
you
typically
look
for
on
a
cig
sponsorship
right
and
the
the
readme
also
links
back
to
the
cap.
So
there's
some
notion
of
origin.
D
Everything in your configuration looks like a Kubernetes API object. If you can type kubectl apply on a directory, you can drop a kustomization file in there, which is yet another YAML file, and run kustomize on that directory. It will then spit out the same objects that you had before, but with some customizations applied, like adding labels or adding annotations and so forth.
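kustomize itself is a separate CLI driven by a kustomization.yaml, but as a rough, hypothetical illustration of the kind of transformation being described (not the real implementation), it behaves like a pure function over the same objects you would otherwise kubectl apply:

```python
# Conceptual sketch only (not kustomize): read plain Kubernetes manifests on
# stdin, stamp a common set of labels onto every object, and emit them again.
import sys
import yaml  # pip install pyyaml

COMMON_LABELS = {"app": "my-app", "env": "staging"}  # hypothetical values

objects = [obj for obj in yaml.safe_load_all(sys.stdin) if obj]
for obj in objects:
    labels = obj.setdefault("metadata", {}).setdefault("labels", {})
    labels.update(COMMON_LABELS)

# Output is the same API objects as before, just customized; it could be
# piped straight into `kubectl apply -f -`.
yaml.safe_dump_all(objects, sys.stdout, default_flow_style=False)
```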
A
Right, got it, and I'll click on that KEP tracking board. This is something that I'm working on to help track work like this, basically so people have the ability to see where things are in the process. The backlog right now has not been curated, so I'm in the process of doing that, and these items will probably be moved across the board; but essentially, in the future, as KEPs go through their lifecycle, this will help people quickly see where things are.
E
Good, cool. This is going to be a very lightning update on what we're doing in SIG Multicluster: who we are, what we do, etc., and some links to where you can find more detail if you need it. A quick introduction to the SIG: we're focused on solving common challenges related to the management of Kubernetes clusters, and the management of applications that run inside them.
E
Out
of
that
work,
we
also
identified
the
need
for
some
smaller,
reusable
components,
one
of
which
is
cluster
registry,
which
is
about
to
go
into
beta.
So
this
gives
you
a
place
to
register
a
bunch
of
different
clusters
so
that
the
multiple
users
can
can
be
talking
about
saying,
and
similarly,
one
of
the
bigger
challenges
of
federating
clusters
together
is
providing
unified
network
access
across
multiple
clusters.
So
one
of
the
projects
that
spawned
out
of
that
was
something
called
cube.
E
Mci
Multi
cluster
ingress,
and
this
is
a
command-line
tool
that
gives
you
the
ability
to
easily
configure
ingress
points
that
are
spread
across
multiple
clusters.
I'm
going
to
give
you
a
very
brief
update
on
what
these
three
sub
projects
are
busy
with
at
the
moment,
so
this
very
high
level
diagram
of
what
cluster
Federation
gives
you.
Essentially
you
have
a
control,
plane,
Federation
control
plane,
which
is
conceptually
similar
to
the
kubernetes
control
plane,
but
it
deals
with
multiple
clusters.
Instead
of
multiple
nodes.
This
has
a
bunch
of
api's.
E
You
can
build
CL
eyes
and
you
eyes
etcetera
on
top
of
it,
and
this
then
federates
out
essentially
API
commands
to
multiple
underlying
clusters,
which
may
or
may
not
be
on
different
cloud
providers.
So
very
you
know
if
you
replace
the
boxes
on
the
right
there
with
nodes
and
the
control
plane
in
the
middle
with
kubernetes
you've,
you
know,
got
the
right
idea
the
status,
so
we
started
off
a
couple
years
back
with
cluster
Federation,
which
has
now
become
known
as
Federation
v1.
E
This
was
used
primarily
as
a
proof-of-concept
to
gather
ideas
and
validate
some
of
the
thoughts
we
had.
It
explicitly
made
use
of
the
kubernetes
api
itself,
so
there
was
API
compatible
with
kubernetes
and
we
added
annotations
to
provide
some
of
the
additional
details
needed
to
support
multi
cluster.
It
got
to
a
point
where
it
supports
most
of
the
you
know,
commonly
used
community
starter
deployments,
replica
sets
services,
etc,
etc,
and
it's
been
used
reasonably
extensively.
Some
of
you
may
have
been
to
the
Q
con
Europe
presentation,
where
CERN
did
a
very,
very
interesting
presentation.
E
They
have
hundreds
of
clusters
that
they
use
cluster
Federation
v1
to
manage.
They
essentially
fought
the
code
and
did
their
own
thing
with
it,
and
there
are
quite
a
few
other
organizations
that
offense
the
things
we
came
to
the
not
unexpected
conclusion
that
some
of
the
things
that
we
did
in
v1
were
not
a
long-term
viable.
E
One
of
them
was
trying
to
emulate
the
kubernetes
api
and
using
annotations,
which
I
think
everyone's
now
familiar
with
the
fact
that
that
doesn't
work
over
the
long
term,
but
it
was
very
useful
for
validating
our
thoughts
and
prototyping
a
lot
of
this
work,
and
many
of
you
have
probably
seen
demos
and
presentations
at
various
conferences
of
the
work.
That's
happened
there.
We
do
not
plan
to
extend
that
any
further
or
do
much
further
work
on
v1.
E
Most
of
the
effort
is
focused
on
version
2
at
the
moment,
so
we
have
now
a
federation
specific
API.
So
these
are
like
first-class
IP
eyes
that
are
used
to
control
Federation's
of
clusters.
We
also
focused
on
we
discovered
during
the
course
of
the
one
that
there
are
many
many
different
potential
use
cases
for
this.
This
kind
of
technology
and
coming
up
with
a
standard
you
know,
shrink-wrapped
Federation
turned
out
to
be
not
necessarily
what
people
wanted.
E
What
they
wanted
to
do
was
make
a
lot
of
the
sort
of
chores,
of
managing
multiple
clusters
and
applications
across
them
easier
and
then
build
higher
level
functionality.
On
top
of
that,
so
the
v2
is
function
is
focused
very
squarely
on
decoupled,
reusable
low-level
components.
So
we've
got
things
like
template,
substitution
and
location,
affinity
and
propagation
to
multiple,
reliable
propagation
to
multiple
clusters,
and
then
we've
also
built
a
bunch
of
higher-level
API.
E
My
apologies-
if
you
didn't
get
mentioned
here
and
should
have
moving
on
to
cluster
registry
solicit
as
I
mentioned,
grew
out
of
Federation
b1.
It
turned
out
that
just
the
simple
ability
to
catalogue
a
set
of
clusters
that
multiple
users
need
to
have
access
to
know
what
their
names
are,
what
their
endpoints
are,
etc
and
be
able
to
programmatically.
E
You
know,
do
even
basic
things
like
run
a
bash
script
that
you
know
deploy
is
the
same
thing
into
multiple
clusters
turned
out
to
be
useful,
so
our
cluster
registries
sub-project
built
this
as
an
API
based
on
CR
DS
and
a
reference
implementation
which
are
going
to
go
beta.
So
the
API
is
pretty
much
being
kicked
around
and
finalized.
It's
going
to
go
beta
in
the
next
few
weeks,
the
main
contributors
there
are
Google
and
Red
Hat
and
then
multi
cluster,
ingress
and
I
must
apologize
here.
E
I've
not
been
able
to
get
the
full
current
status
of
that.
But
basically,
this
is
a
command-line
tool
that
grew
out
of
Federation
b1
and
the
need
to
simply
manage
without
a
control,
plane
or
anything
else,
fancy
to
be
able
to
manage
the
command
line
tool
ingress
points
that
that
span
multiple
clusters,
the
initial
release
of
that
works
on
Google
cloud
that
there
are
plans
to
support
multi
cloud
ingress
in
the
future
and
the
maid
code
contributed
there
is
Google
and
that
about
my
five
minutes
up.
E
That was quite a mouthful, so I'll assume that the question was, do pods know where they are, which region and which cluster? That is, to a large extent, kind of independent of federation, so I think in general the answer is yes: some of them will want to know that, either by virtue of labels on their nodes or the cluster information of the cluster that they live in, but that is not something that we've tackled specifically in the federation work.
E
That's a good question, and we actually have use cases for both. People are probably familiar with the fact that we have multi-zone clusters independent of federation; in fact, that was work that we did in the federation SIG back in the day, when it was called SIG Federation. I think there are use cases for both, and I'm not sure that there's a universally recommended way of doing it.
E
If
you
want
to
have
a
single
cluster
that
is
resilient
to
zone
outages
and
are
prepared
to
accept
the
performance
implications
of
running
a
cluster
across
multiple
zones,
that's
a
pretty
reasonable
approach
to
take.
If
you
want
a
higher
performance
cluster
that
doesn't
have
to
traverse
inter
zone
networking
when
running
your
applications,
but
you're
comfortable
with
the
idea
that
when
that
cluster,
when
that
single
zone
goes
away,
then
your
entire
cluster
is
dead.
That's
also
a
pretty
reasonable
approach.
F
It's a very interesting proposal: it proposes a new set of APIs, which are implemented and currently out of core, which is a good thing, some controllers, and a way for plugins, or for pods, to specifically request access to different facets of the network topology, including things that are not necessarily IP-based, which I think is an interesting aspect of it. We're trying to see if we can unify that work with the multi-network work, and with the next topic.
F
The
next
topic,
then,
would
be
device
plugins,
there's
this
class
of
networking
setups,
where
people
are
using
things
like
SR
io
V,
which
gives
you
virtual
Hardware
functions
with
art
which
are
limited
in
number
and
we're
trying
to
figure
out
exactly
how
that
maps
into
kubernetes
properly.
So
there's
been
some
proposals
and
some
demonstrations
of
how
people
have
done
it.
F
It's
an
interesting
topic,
especially
when
you
start
to
get
into
the
space
where
networking
plugins
are
actually
on
the
same
card
as
other
devices
like
GPUs
and
figuring
out
how
we're
gonna
make
those
sets
of
plug-in
api's,
which
don't
actually
know
anything
about
each
other
talk
to
each
other.
So
that's
there's
a
very
ongoing
topic.
It's
challenging
because
I
really
don't
know
how
to
solve
it.
Sort
of
at
all
good
news.
F
Core
DNS
is
GA
in
111,
which
means
it
is
available
for
anybody
who
wants
to
use
it
in
their
clusters,
and
we
think
that
it
will
actually
work.
I
think
it
is
now
the
default
in
cube,
atom
and
there's
a
separate
kept
for
making
core
DNS
be
the
default
across
all
of
the
installers,
maybe
in
the
112
cycle.
So
for
people
who
are
having
issues
with
DNS,
maybe
give
coordinates
a
try,
there's
a
lot
of
follow-on
proposals
of
things.
F
Also,
ga
in
111
was
the
IP
vsq
proxy
mode,
which
gives
people
a
much
higher
performance,
lower
programming,
latency
network
interface
for
serving
service
IPS.
So
it
we
should
largely
replace
the
IP
tables
mode.
It
is
also
not
the
default,
but
it
is
GA
in
111
and
we'll
look
making
it
the
default,
possibly
in
112.
F
Looking forward... oh sorry, let me talk about test flakes first. We still have some test flakes around networking; I've got one up here on my screen right now, in fact, and we're trying to work through and figure those out. For people who want to contribute, and I know there are some newcomers to SIG Network: I would love for people to pull up the testgrid for SIG Network and help us figure out what's flaking and what's going on here, because networking tests are really hard.
F
Looking
forward
we're
still
discussing
the
evolution
of
the
ingress
API,
there's
a
proposal-
that's
been
put
out
by
some
folks
to
decompose
the
entire
API
from
a
monolithic
map
of
routes
into
individual
route
resources.
Much
more
like
OpenShift
did
in
the
first
place
it
turns
out.
Maybe
this
will
be
a
better
mapping
to
what
people
are
actually
trying
to
achieve
with
ingress
we're.
F
Also
looking
at
maybe
up
leveling
the
base
functionality
set
the
downside
there
being
that
some
cloud
load
balancers
would
no
longer
be
eligible
as
ingress
implementations
or
would
be
sort
of
reduced
scope,
ingress
implementations,
so
we're
trying
to
keep
all
options
on
the
table.
Obviously,
I
really
don't
want
to
put
cloud
providers
out
out
of
the
running
in
this
regard,
but
we
can't
stay
stuck
where
we
are
for
too
much
longer.
F
There's
also
some
proposals
out
around
evolving
the
way.
Network
plugins
work
towards
more
of
a
G
RPC
base
model
similar
to
the
rest
of
the
plugins
in
the
system
in
in
the
node
level
ecosystem,
which
I
think
is
interesting
because
it
gives
us
a
better
coupling
between
cni,
plugins
and
cube
proxy
and
the
service
implementation
and
I
think
there's
places
where
we
could
have
better
behavior,
better
performance.
F
If
those
things
were
more
closely
related
and
the
last
one
I'll
throw
out
is
ipv6
work,
the
ipv6
ipv6
support
in
kubernetes
is
supposed
to
work,
we're
still
working
on
some
CI
work.
There
we've
got
some
documentation
around
how
we
might
go
about
doing
dual
stack,
ipv4
and
ipv6.
At
the
same
time,
it's
a
fairly
substantial
API
level
change.
F
It's
gonna
have
to
teach
pods
about
having
multiple
IP
addresses,
and
so
as
that,
we're
proceeding
sort
of
cautiously
on
that,
but
it
is
on
the
table
now,
especially
now
that
we've
gotten
core
DNS
and
IP
BS
to
GA
I,
think
we've
got
some
bandwidth
to
start
looking
at
new
efforts
again,
that's
the
quick
update
of
all
the
most
of
the
major
work
items.
I
won't
say
all
because
I'm
sure
I
missed
something
and
I'll
be
happy
to
take
questions.
If
anybody
has
them.
A
There
were
some
questions
in
chat.
It
might
actually
be
better
to
just
answer
those
in
chat
kind
of
it
as
part
of
that
he'd
run
out
of
time.
Thank
you
so
much
for
the
update
that
is
great
and
2
p.m.
where
it
was
t1.
Ok,.
G
So
moving
quickly,
we
in
the
Charter
we
created
a
security
context
file
with
contact
list,
we're
working
with
st.
cloud
provider
to
transition
to
the
new
cloud
provider.
Rug
structure,
since
this
new
structure
is
brand
new.
For
now
we're
going
to
retain
ownership
of
this
as
of
sub-project
until
a
clear
process
is
established.
Fabio,
my
co-chair
has
submitted
a
PR
with
account.
G
This
PR
is
envisioned
as
a
1.12
target.
We
modeled
it
after
the
existing
OpenStack
effort.
We've
got
an
initial
build
phase
working.
Currently
the
testing
is
set
to
manual.
We've
got
some
CI
definition
configuration
put
in
place,
but
it's
not
enabled
our
test
plan
is
to
engage
in
the
112
timeframe.
This
vSphere
infrastructure
for
CI
testing.
G
We
had
a
long
discussion
on
this
in
the
last
safe
meeting.
So
if
anybody's
interested
in
details
of
or
looking
at
there
I'd
encourage
you
to
go.
Look
at
the
zoo,
video
recording
post
on
YouTube.
There
were
possibly
looking
at
creating
a
working
group
under
the
VMware
sake
to
host
this
CI
testing
effort.
A
Great
Steve,
thank
you
so
much
looking
forward
to
great
more
work
from
the
signal
so
without
further
ado,
we're
going
to
move
on
to
announcements.
First
part
of
announcements
is
a
section
we
call
shoutouts
shoutouts
are
actually
sourced
from
the
slack
Channel,
the
Q&A
section.
That
is
also
named
shoutouts.
A
So
as
you
traverse
the
community
and
the
things
that
you
do
within
the
find
people
who
are
very
helpful
and
and
do
things
that
you
want
to
recognize
and
have
them
called
out
here,
just
put
their
name
or
handle
in
there,
and
we
will
do
that
so
I.
First
one
comes
about
Jennifer
Rondo
who's
been
working
on
the
weekend
to
get
the
111
Docs
builds
working
again,
Thank
You
Jennifer
for
doing
that.
A
I cannot possibly say how much I appreciate Christoph, because there are no words sufficient for everything he does. Another shoutout to neolit123, also known as Lubomir Ivanov, for really stepping up lately to help with user-facing issues on the kubeadm 1.11 release, and for his contributions to the SIG. Those are all great shoutouts to people doing fantastic work.
A
So please sign up for the release team, or if you're interested in shadowing, we'll be more than happy to have you participate and learn along the way. I believe that's it for now. I am very grateful to everybody in this meeting for your time, attendance, and patience, and we're going to go ahead and wrap it up. So thanks, everybody, have a great rest of your day, and happy Thursday. Thanks, Jaice, for driving.