From YouTube: Kubernetes Community Meeting 20200917
Description
The Kubernetes community meeting is intended to provide a holistic overview of community activities, critical release information, and governance updates. It also provides a forum for discussion of project-level concerns that might need a wider audience than a single special interest group (SIG).
See this page for more information! https://github.com/kubernetes/community/blob/master/events/community-meeting.md
Like what you see here? Continue the conversation on https://discuss.kubernetes.io
A
Hello, wherever you are joining us from, for the September Kubernetes community meeting. My name is Eddie Zaneski and I will be your host today. I'm a developer advocate over at Amazon Web Services and I work mostly under SIG CLI.
A
So please don't say anything you don't want publicly on the internet. Also be reminded that we have a code of conduct, so please be excellent to each other. And with that, if you are not speaking, would you kindly mute yourselves?
A
Wonderful. So first up we have a demo by Jordan Liggitt. He's gonna show off some really cool, helpful warnings that have landed in the recent release. So please, Jordan, take it away.
C
All right, can everybody see my terminal screens? Yes? All right. So if you have been using Kubernetes or developing Kubernetes over the past several years, you may have noticed that there's a lot of stuff going on, and it's kind of hard to keep up with it all. Until recently, the only mechanisms we had for communicating information to our users and our developer community were things like release notes or email announcements or Slack announcements — things like that — and those are fine.

C
But our release notes tend to be pretty long, and not everybody reads all 30-40 pages of them as thoroughly as some of us do, and the people who are consuming Kubernetes releases — oh, Josh said: also Last Week in Kubernetes Development.
C
Yes, subscribe to that if you don't already. But not everyone who's using Kubernetes or developing for Kubernetes is always reading all of those channels, and so we wanted to come up with a way to communicate information to users at a point where it would be useful and relevant to them. And so, as an example, I have here a manifest file that has, you know, some API objects.
C
So what we added in 1.19 is the ability for the server to return warnings to users. And so if I apply this manifest full of v1beta1 objects, starting in 1.19 you will get warnings like this, that tell you super useful things like: this API was deprecated in 1.17, it is planned to be made unavailable in 1.22, and what version you should use instead. The way this is done is via the server sending back headers to the client, so old clients who don't know about the headers aren't affected at all.
C
It doesn't change the status code of the response. It doesn't change the API body of the response, so old clients aren't affected by these. But it lets new clients — kubectl and client-go and other clients who want to pay attention to these — have this information from the server in a compatible way.
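The header mechanism Jordan describes follows the standard HTTP `Warning` header shape (a warn-code, a warn-agent, and a quoted warn-text); the Kubernetes API server uses code 299. As a rough illustration — not the actual client-go implementation — a client could pull those messages out of a response header like this:

```python
import re

# RFC 7234 Warning header: warn-code SP warn-agent SP quoted warn-text.
# Kubernetes API servers emit code 299 with agent "-" for these warnings.
WARNING_RE = re.compile(r'(\d{3}) (\S+) "((?:[^"\\]|\\.)*)"')

def parse_warnings(header_value):
    """Extract (code, agent, text) tuples from a Warning header value."""
    return [(int(code), agent, text.replace('\\"', '"'))
            for code, agent, text in WARNING_RE.findall(header_value)]

hdr = ('299 - "apps/v1beta1 Deployment is deprecated in v1.9+, '
       'unavailable in v1.16+; use apps/v1 Deployment"')
print(parse_warnings(hdr)[0][0])  # 299
```

Because the information rides in a header, a client that never looks at it — like the old clients mentioned above — sees an unchanged response.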
C
So that's really great. The way the client intercepts these is done at the client level, when it's reading the responses from the server, and so it works for any kubectl command. If I do a get of those deprecated APIs, I get the same warnings. If I try to annotate them — I'm writing to those — I get the same warnings.
C
So this is great. In 1.19 we're using this to communicate information about deprecated APIs specifically, but in future releases we can use this same mechanism to communicate warnings about other things — things like deprecated labels. If you're still using the beta OS and architecture labels to target nodes, those are deprecated; the GA versions have been available for several releases, and so we can start to detect and communicate information about that.
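The beta node labels he mentions (`beta.kubernetes.io/os` and `beta.kubernetes.io/arch`) have GA replacements without the `beta.` prefix. A hypothetical helper — not a real kubectl feature — sketching the rename a tool might apply to a nodeSelector:

```python
# The beta node labels gained GA equivalents in Kubernetes 1.14.
GA_LABELS = {
    "beta.kubernetes.io/os": "kubernetes.io/os",
    "beta.kubernetes.io/arch": "kubernetes.io/arch",
}

def modernize_node_selector(selector):
    """Return a copy of a nodeSelector with deprecated beta labels renamed."""
    return {GA_LABELS.get(key, key): value for key, value in selector.items()}

print(modernize_node_selector({"beta.kubernetes.io/os": "windows"}))
# {'kubernetes.io/os': 'windows'}
```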
C
We can detect and communicate warnings about problematic configurations — like if you're creating a pod that requests four millibytes of memory, because you didn't recognize that the lowercase m or the uppercase M was significant, and you're actually requesting an impossible-to-run amount of memory. We can warn you about that now in a way that is backwards compatible and won't break old clients.
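The footgun here is that Kubernetes quantity suffixes are case-sensitive: `4M` is 4 megabytes and `4Mi` is 4 mebibytes, but `4m` is 4 milli-units — 0.004 bytes of memory. A simplified quantity parser (covering only a few suffixes, not the full Kubernetes grammar) makes the difference visible:

```python
# Case matters in Kubernetes quantities: "m" is milli (1/1000),
# "M" is mega, "Mi" is mebi. A pod requesting "4m" of memory asks
# for 0.004 bytes -- almost certainly a typo for "4M" or "4Mi".
SUFFIXES = {
    "m": 1e-3,
    "k": 1e3, "M": 1e6, "G": 1e9,
    "Ki": 2**10, "Mi": 2**20, "Gi": 2**30,
}

def parse_quantity(quantity):
    """Parse a simplified Kubernetes quantity into a number of bytes."""
    # Try longer suffixes first so "Mi" wins over "m".
    for suffix in sorted(SUFFIXES, key=len, reverse=True):
        if quantity.endswith(suffix):
            return float(quantity[: -len(suffix)]) * SUFFIXES[suffix]
    return float(quantity)

print(parse_quantity("4m"))   # 0.004
print(parse_quantity("4Mi"))  # 4194304.0
```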
C
So that's the first thing I wanted to show. The second thing is something that's administrator facing. So this is great for communicating warnings to users at the point when they are using the deprecated things, but if I'm responsible for running this cluster, I might not be the one running these kubectl commands and being exposed to these warnings. And so it would be really helpful for me, as the administrator, to know: is this cluster safe to upgrade?
C
Are people still making use of these deprecated things? And so we added metrics to make that information available to the cluster administrator. So let me run that again so you can see what is happening. This is just scraping the metrics endpoint from the API server, running it through something to transform that into JSON, and then running a jq query.
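The metric behind this demo, `apiserver_requested_deprecated_apis`, is served in the Prometheus text exposition format; his pipeline converts it to JSON and filters with jq. As a hedged stand-in for that pipeline, a few lines of Python can pull the same series out of a raw scrape (simplified parsing that ignores escaping and unlabeled metrics):

```python
import re

# Match one labeled sample in the Prometheus text format, e.g.
#   name{key="value",key2="value2"} 1
LINE_RE = re.compile(r'^(\w+)\{([^}]*)\}\s+(\S+)$')

def find_metric(text, name):
    """Return (labels, value) pairs for every sample of the named metric."""
    out = []
    for line in text.splitlines():
        m = LINE_RE.match(line)
        if m and m.group(1) == name:
            labels = dict(kv.split("=", 1) for kv in m.group(2).split(","))
            labels = {k: v.strip('"') for k, v in labels.items()}
            out.append((labels, float(m.group(3))))
    return out

scrape = ('apiserver_requested_deprecated_apis'
          '{group="extensions",version="v1beta1",resource="ingresses",'
          'removed_release="1.22"} 1')
for labels, value in find_metric(scrape, "apiserver_requested_deprecated_apis"):
    print(labels["resource"], labels["removed_release"])  # ingresses 1.22
```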
C
So this is the metric that is newly published in 1.19, and an administrator could run that and see which deprecated APIs have been requested on this instance and what release they're going to be removed in. And you could even join that information to the metric that gives you details about what types of requests are being made. So if you take that deprecated metric and then join it to the request information metric, you can see details.
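The join he describes — lining up the deprecated-API series with the per-verb request series — is just a match on their shared labels. A sketch with made-up sample data (the label sets here are illustrative, not the exact series schemas):

```python
def join_on(a, b, keys):
    """Join two lists of (labels, value) pairs on shared label keys."""
    index = {tuple(lbl.get(k) for k in keys): (lbl, v) for lbl, v in b}
    joined = []
    for lbl, v in a:
        match = index.get(tuple(lbl.get(k) for k in keys))
        if match:
            joined.append((lbl, v, match[0], match[1]))
    return joined

# Illustrative samples: a deprecated-API series and a request-count series.
deprecated = [({"group": "rbac.authorization.k8s.io", "version": "v1beta1",
                "resource": "roles", "removed_release": "1.22"}, 1.0)]
requests = [({"group": "rbac.authorization.k8s.io", "version": "v1beta1",
              "resource": "roles", "verb": "create"}, 7.0)]
for dep, _, req, count in join_on(deprecated, requests,
                                 ("group", "version", "resource")):
    print(dep["removed_release"], req["verb"], count)  # 1.22 create 7.0
```

The joined view answers both questions at once: which deprecated APIs are in use, and what kind of traffic they are receiving.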
C
So administrators can use this to recognize when it's safe to upgrade a cluster: if no deprecated APIs that are going to be removed in the next version have been requested, then it's safe to upgrade. And then the last bit is, if they detect that requests have been made to these APIs: in the audit logs there is now an annotation that's added to requests made to deprecated APIs. So assume I see someone is writing to this deprecated RBAC API.
C
The output goes to standard error. We already write things to standard error, and standard out is used for writing API output, so the warnings always go to standard error.
A
Thank you so much. Next up we have Jeremy, who I think just joined the call. Jeremy Rickard is going to give us an update as the 1.20 release lead.
C
Hey everybody, sorry for the delay in joining. As Eddie just mentioned, I'm going to give you a brief update on the 1.20 release. We officially kicked off the release this week, so this is week one per the schedule. You can of course find that in the sig-release kubernetes repo, under the 1.20 folder in the releases tree of that directory structure. Some big takeaway dates: enhancements freeze will be the first milestone that's coming up — that'll be October 6th.
C
After that we'll have code freeze, which will be week nine — that's November 12th. Thanks, Bob, for dropping the link into the chat here; it's k8s.dev/release. KubeCon will come up right after that, so code freeze will happen and then KubeCon will occur, like, right after that. So we have probably a two-week period where KubeCon's happening and holidays happen in the US and some other places, so we've pushed the docs deadline out a little bit.
C
This one will be 13 weeks because of that period of KubeCon and holidays, so the docs deadline will be November 30th, and then on December 8th we will tentatively plan to do the Kubernetes 1.20 release, with the retro following the week after. One other note: shadow selection is pretty much done; we're making some final passes, just going through and vetting the choices that the leads have made.
A
Awesome, thank you for the update, Jeremy, and for jumping right into the hot seat — so cheers to you. Up next we have Mark Rossetti and Michael Michael to give an update on SIG Windows.
E
I have an ultra-wide screen and it tends to skew things. Oh.
F
Okay — and I don't know what it is — no worries. All right, hello everybody! It's been a few months since SIG Windows came to provide you an update, and we've actually been hard at work. We've been doing a ton of things in SIG Windows to kind of advance the vision of Windows developers in Kubernetes, enabling them to build cloud native apps and host them on this wonderful platform that we're all operating on.
E
Yeah, hi, I'm an engineer at Microsoft; I'm in the Azure org.
E
Sure, I'll take this one. So this is a slide focusing on a lot of the work that we've done on containerd in the last couple of releases. There's been a lot of direction in the Kubernetes community to align with the CRI APIs, and containerd is the first Windows implementation of that that we've got. With it, we're enabling a lot of new user scenarios.
E
A lot of them are — I'll go over some highlights here. So some of the first big ones are: there's a lot of new CNI functionality and plugins that are better supported with containerd.
E
Containerd has a more robust and less monolithic architecture, and the design and maintenance are completely community driven. Specifically for Windows, there's a number of features enabled with containerd that aren't possible today with Docker, that users have been asking for: the ability to map single files into containers; support for graceful shutdown and termination grace periods — that one we get a lot of asks around; true namespacing for containers and workloads; and also, like I mentioned above, just better workflows around CNIs. At KubeCon EU 2020,
E
there was a talk by Muzz and Michael about a lot of the architectural details around how container runtimes work for Windows, which I've linked here; if you're interested in that, I encourage you to check it out. Next steps: we're working towards Hyper-V isolated containers, which is a way of supporting secure multi-tenancy for Windows containers, and also GMSA support for enterprise users.
F
Thank you, Mark. So let's talk a little bit about CSI Proxy. One of the huge advantages of our cloud native ecosystem is that the storage providers and the storage vendors can come in and provide support for enabling containers to utilize advanced storage capabilities on Windows. This is something that was lagging, and part of it was because of limitations of the platform, and part of it was because we brought Windows late into the ecosystem of Kubernetes.
F
So we've started on an endeavor of enabling CSI for Windows, and the CSI Proxy work went to beta with 1.19. It was a tremendous achievement from our team. Essentially, the way we did it is that we used a proxy that's running at the host operating system — again, a Windows node here — and that's enabling us to bypass some of the privileged container limitations
F
that Windows has. So the bulk of the work there was in the CSI Proxy component. Essentially you've got a native Windows service as well as a set of v1beta1 APIs that support disk, volume, SMB and file system operations. With the first beta release that's out now, we're supporting Azure Disk and Azure File as CSI drivers, and GCE Persistent Disk. Now, you're gonna see there's a couple of huge commitments here, right? How about AWS? How about vSphere?
F
Well, we've just launched work streams so we can support those CSI providers in the 1.20 release. We're also going to be adding full CI/CD, so we can actually catch issues early on and fully vet all of these different CSI providers we support. And the most important thing is that there is work coming up on privileged containers, which we're going to talk about later in this deck — so we're going to try to understand the impact of that on CSI Proxy, and how we can enable this model to work better as advancements in Windows are happening. Next slide, please. On the networking front: we talked about compute, we talked about storage — networking always comes next.
F
A lot of advancements have happened in networking as well. We have support for direct server return, which enables you to scale to a large number of services efficiently. We have support for EndpointSlices — just like Linux containers got EndpointSlices in the last couple of releases — so now you can support services with a large number of endpoints. We've added session affinity (like sticky sessions), destination preservation, a locally routed VIP, and dual-stack IPv6 support. So you can see, more and more advancements are happening in networking.
F
We
want
to
make
sure
that
the
windows
developers
get
to
take
advantage
of
them
as
well.
On
the
cni
front,
you
may
have
noticed
the
latest
announcements
from
caligo.
Now
it
has
an
open
source
release
for
windows
prior
to
that
it
was
a
part
of
their
commercial
distribution.
There,
the
tiger
essentials
package,
now
it's
open
source
and
then
the
andreas
cni,
also
that's
built
on
top
of
the
ovh.
Sorry
as
well
on
top
of
ovs,
also
has
support
for
windows,
including
network
policies.
F
So
now
we
have
two
almost
enterprise-grade
cni's
that
are
supported
on
windows,
including
network
policies.
So
you
know
lots
of
advancements
there,
like
we
as
a
community,
are
super
happy
for
for
these
enhancements.
F
What's
next
for
us
on
the
networking
front,
we're
going
to
promote
dsr
to
stable,
there's
some
work:
don't
need
to
do
around
q,
proxy
local
traffic
management,
including
testing
with
more
and
more
load
balancers.
So
we
can
support
them
next
slide.
Please
I
want
to
put
the
technology
matrix
out
there.
F
We
also
have
documented
this
in
the
kubernetes
docs,
but
you
get
asked
you
know
what
what's
kind
of
your
plan
here
and
our
plan
is
a
as
a
cig
is
to
always
support
the
last
long-term
servicing
channel
that
microsoft
has-
and
in
this
case
it's
windows,
server,
2019,
so
you're
going
to
see
that
we've
been
supporting
that
since
1.14
and
then
we
also
want
to
support
the
latest
two
sacs
that
stands
for
semi-annual
channel
releases
of
microsoft.
F
These
are
the
releases
that
are
only
supported
for
a
short
period
of
time,
usually
18
months,
and
they
are.
You
know
the
commitment
from
the
customers
that
they're
going
to
keep
updating
this
their
their
bills
of
windows
on
a
rapid
face
to
keep
with
the
innovation.
That's
happening
in
this
space.
So
right
now,
with
version
119
we're
going
to
support
1909.2004..
F
So
essentially,
the
last
two
semiannual
channels
and
on
the
bottom
you're
going
to
see
on
the
container
d,
I
want
to
go
ga
with
hyper
v,
hopefully
at
1
to
20..
That's
our
goal:
we're
going
to
support
version
1.5
of
container
d
and
again
the
same
three
releases
of
windows,
2019,
1909
and
2004
next
slide.
Please!
F
It could have been four if we add the networking here, but on the compute side we're gonna keep investing in CRI containerd — you're gonna see the KEPs there in a second. We want to add Hyper-V isolation support, GPU support, and privileged containers; that's going to bring us closer to parity with Linux and also address some of the key gaps that we have, which users or customers have been requesting from us. On deployment and lifecycle management — really huge — we want to get kubeadm to stable; that might be dependent on privileged containers, but we're working on it.
F
We're going to start introducing Cluster API support shortly. The focus there will start with Azure and vSphere, and then it will move to other Cluster API providers as well, but that's something that's near and dear to our heart: we want to enable users to have an easier way to deploy Kubernetes clusters with Windows. And on the storage front, we want to promote the CSI work to stable with one or more storage providers, and we're going to enable you to do backup recovery with Velero support and CSI snapshots. That's actually a big thing of CSI.
E
Yeah, so we've already kind of alluded to privileged container support for Windows coming up. We wanted to highlight this again here, because it could have so many implications across so many different areas of Kubernetes.
E
Our plans are to introduce privileged container support with 1.20, depending on approval of the KEP and the enhancement.
E
The Windows container platform team has a demo video of privileged containers functionality with containerd today; there's a link to that video to check out. And yeah, privileged containers could support or enable so many different scenarios, including many Cluster API scenarios, network configuration, the CSI plugins as we alluded to, logging daemons, and just making it easier to manage the Windows Server nodes themselves — and many more. Here's a little bit of information that we wanted to share with other SIGs.
E
So SIG Windows is a very cross-cutting SIG. As Michael mentioned, we kind of have our foot in storage, compute, networking and everything, and we wanted to use this time to reiterate to the community: please, if you are reviewing KEPs, or even just doing code reviews, and you think that this could impact Windows, feel free to reach out to us. We're happy to take a look, because we want to maintain Windows support and keep that healthy in the community.
E
There's
contact
information
coming
later
and
again,
we
wanted
to
just
highlight
the
privileged
containers
and
some
of
the
support
that
we
think
we're
going
to
need
to
help
land.
This
particularly
sigoth
and
signode
are
probably
going
to
help
need
to
help
us
review
and
find
some
of
these
changes
we've
reached
out
to
both
of
those
communities
already
in
their
community
meetings.
But
we
wanted
to
also
open
up
the
floor
for
anybody
to
bring
in
scenarios
that
think
could
benefit
from
windows
privileged
containers.
E
Here's
a
quick
summary
of
and
links
to,
the
caps
that
we
currently
are
working
on.
We've
covered
most
of
these.
It's
continuing
to
drive
cri
continuity,
support
to
stable
support
for
windows,
privilege
containers,
a
cluster
api
proposal,
enhancement
proposal
for
the
windows,
node
support
and
cluster
api,
and
then
cuba,
adm
for
windows.
E
And
here's
a
couple
of
resources
for
how
folks
interested
in
contributing
can
help
or
start
contributing.
We've
got
a
slack
channel,
sig
windows
where
anyone's
free
to
post
on
we
have
weekly
community
meetings
at
10,
30
p.m.
Eastern
time,
every
tuesday,
with
the
full
backlog
of
the
recordings
available
to
watch,
we
have
a
product
project
board
that
we're
maintaining
with
issues
that
people
can
come
pick
up
or
that's
also
a
good
way
to
raise
issues
with
the
sake.
E
If
you
don't
feel
comfortable
or
don't
have
time
to
drop
a
line
in
the
psych
channel,
we're
looking
for
pr
reviewers
and
also
asking
other
sigs
that,
where
there's
where
we
need
approval
from
to
help
us
land
some
of
those
prs,
you
can
help
by
reviewing
our
e
to
e
test
case
failures
and
even
better
if
you
can
help
troubleshoot
some
of
them
and,
as
I
mentioned
before,
help
us
either
write
additional
documentation
or
user
stories
and,
just
in
general,
consider
what
windows
support
would
look
like
in
your
features.
F
Yeah, essentially, it's not just code that we're looking for. If you can do other things — write docs, review test case failures — come on, we have a lot of work for additional community members to come in and contribute.
F
Yeah, and on our last slide, this is how you can find us. We have documentation, we have our Slack channel, a mailing list, we're on GitHub, a YouTube playlist with all the recordings going back three-plus years, and then our community meetings. So if you want to know how to find us, we're on Slack, GitHub and all these things. Thank you all, we appreciate it.
A
Thank you, Mark and Michael, for the update on SIG Windows. I just performed a live migration: my power came back and I seamlessly transitioned back to plugged-in wired ethernet and power, and I'm feeling great. Next up we have SIG Multicluster with Jeremy Olmsted-Thompson and Paul Morie.
G
All right, thanks everyone. Let me just share my screen here.

G
Hey everyone, all right. So I'm Jeremy Olmsted-Thompson, I work on GKE at Google, and Paul —
G
Awesome, and we're the leads for SIG Multicluster. So let me kind of run you through what we've been working on here. We'll start with what we did last cycle. I think some of the big things some of you may have seen: we gathered consensus from all of you on what we should actually call the group of clusters that work together — which, as SIG Multicluster, we are first concerned with — and we've come up with "ClusterSet".
G
So thank you to everyone who participated in our survey, and also thanks to Josh for putting the survey together and getting that out there. We've put together some basic guidelines for multi-cluster deployments, starting with namespace sameness: basically, the concept that within this ClusterSet a namespace should have a consistent meaning and consistent ownership.
G
Namespace "foo" shouldn't be used for one thing, owned by one group, in one cluster, and owned by a completely different group and used for something completely different in another cluster. We've continued to make good progress on KubeFed — we'll talk about that in a minute — and we've taken the multi-cluster Services API concept to an alpha release in a sigs repo. I've got a link here, and we'll go a little bit more into that in a second as well.
G
So let's talk about what we're doing now. I think the first thing on our mind is to further define and figure out what we actually want to do with ClusterSets. Now that we have this definition, we're starting to have this concept of, you know, best practices around that. What does it actually mean to be a part of a ClusterSet? What does that mean in terms of membership?
G
Do we need to talk about what it means to have a registration API, or something like that? And we want to continue to refine the concept of cluster identity. We're also looking at how we can make it easier to actually place work in those clusters — we've got a little bit more information on that shortly. We've got some new work in KubeFed.
G
That's interesting, and we're going to be looking at how we can take the multi-cluster Services APIs from alpha, where they are now, through beta and then, of course, eventually to GA. So, getting into the details of each project here, let's talk about KubeFed first. Paul?
H
What does pull reconciliation mean? Good question, I'm glad you asked. So there's two basic models when we say "pull" — the other one is "push", and push is what KubeFed does today. Push means that you have something outside the cluster that's being programmed, that is making a client connection to the cluster being programmed and pushing things to it. Pull would mean:
H
maybe there is some sort of agent or reconciler running in the cluster being programmed, that's watching an API surface external to that cluster, pulling in — via a watch — information about what it needs to do within that cluster. That's the pull reconciliation model that folks working on KubeFed are considering now. So I think this is your chance, if you're interested in that, to get involved and affect how that work plays out.
H
I also want to give a shout-out to Jimmy, Hector and others from D2iQ for helping to move the project forward. They're doing regular releases in KubeFed, and I just want to say thanks. Go ahead, Jeremy.
G
Awesome, thanks Paul. So the next thing I want to dig into is the multi-cluster Services APIs that I mentioned. We've been working on this KEP for a while as a group — please check it out. We've got our alpha release up on the sigs repo, but what's really interesting is kind of what's next. We've kind of outlined the basis of what a multi-cluster Services API actually looks like, and we took the approach of going API first.
G
So this is — you know, we've defined what a consistent experience should look like, but we've left it open for implementations. We don't have a canonical implementation; we're actually sourcing a few, and it's awesome — we've seen a lot of community involvement there already. There are actually a couple of different implementations in flight, which is great, and so we're looking to grow those as well. But before we can take this to beta,
G
we have to figure out a few things first. I think the big thing is multi-cluster DNS, and what DNS actually looks like for services expanded to the multi-cluster space. We have some basic assumptions outlined in the KEP — we've kind of figured out how we address headless services, for example, to start — but this needs to be fleshed out, needs a lot more detail, and we need to really solidify what that looks like before we feel comfortable with beta. Network policy is a big piece.
G
What does network policy actually mean when you cross the cluster boundary? We need to figure out how we want that to apply, and how we can practically implement it, dealing with the fact that there's a whole bunch of native assumptions — on, you know, local watchers on pods, for example. How do we make that sane at the API level across clusters? And then, one of the things that's evolved out of looking at DNS in particular for multi-cluster services is the need for a consistent cluster ID.
G
We've discovered certain characteristics that a cluster ID needs to have: it needs to be a valid DNS label, for example, for MCS in its current form to work. But what else does it need? What are the other characteristics, like uniqueness, things like that? So we're gonna continue to refine that over the next few months. And then back to you, Paul, to talk about the Work API.
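The "valid DNS label" constraint Jeremy mentions is the RFC 1123 label rule Kubernetes uses elsewhere: 1-63 characters, lowercase alphanumerics and hyphens, starting and ending with an alphanumeric. A quick sketch — not SIG Multicluster's actual validation code — of checking a candidate cluster ID against that rule:

```python
import re

# RFC 1123 DNS label: 1-63 chars, lowercase letters/digits/hyphens,
# must start and end with an alphanumeric character.
DNS_LABEL_RE = re.compile(r'^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$')

def is_valid_cluster_id(cluster_id):
    """Check whether a candidate cluster ID is a valid RFC 1123 DNS label."""
    return bool(DNS_LABEL_RE.match(cluster_id))

print(is_valid_cluster_id("prod-us-west-1"))  # True
print(is_valid_cluster_id("-bad"))            # False
print(is_valid_cluster_id("a" * 64))          # False (too long)
```

The label constraint matters because the cluster ID may end up embedded in multi-cluster DNS names, where each dot-separated component must itself be a valid label.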
H
Sure. So what "Work API" refers to is sort of a different API regime from the one that folks may be familiar with from Federation and KubeFed v2, where the kind of federation pattern is to have individual resources that correspond to individual resources being programmed in the clusters that they're supposed to be scheduled to. I would say it's almost the opposite, in certain ways, when we think about the Work API. So imagine that we have, call it, three resources that we want to distribute to some clusters in a ClusterSet.
H
The federation type of pattern would be: you create some version of those three resources, as three distinct resources, in the hosting cluster that KubeFed is deployed in, and each of those individual resources contains scheduling information about where they're supposed to go.
H
So, contrast this with the Work API, which is about — and one thing I'll just mention when we talk about this: please bear in mind, we have not reached the conceptual point where we're talking about scheduling yet; we're at an even more elementary point.
H
I would say, of defining an API that can characterize, within a single resource, that group of three resources in our example. So a Work could contain those three resources and reflect status information about whether they have been programmed and applied to a particular cluster.
H
You
may
sense
the
working
backwards
pattern
being
present
in
this
work
too.
So
the
this
is
in
a
stage
where
we've
done
some
prototyping
there's
a
link
that
you
can
click
in
the
the
slides
that
jeremy's
gonna
share.
H
There's an initial KEP in draft form, and I want to shout out to Valerie, Lancy and Trijin for driving this. Thank you.
G
So what does this mean for all of you? Well, I think the biggest thing is: we need your input. What else should we be thinking about in this space of a ClusterSet, and what should our next priorities be? I think a good place to start is thinking about the current projects that you're working on in your SIGs and working groups. What happens to those projects across the cluster boundary? What do we need to be thinking about?
G
What can you think about that might make your project more useful in the multi-cluster space? What have you built that works for you? You know, working with Kubernetes, you've probably spun up a few clusters before — what have you built that actually links them together? What can you share with us? What should we be thinking about? What can we learn from your experience? And then, calling out in particular, I think, SIG Network?
G
So how can you give us your input? Well, come to our meetups, Tuesdays from 9:30 to 10:20 Pacific. Add your topics to the agenda, reach out on Slack or in the forum — we'd love your feedback. Over the last year we've had some increasingly interesting conversations; things have been happening, and we'd love you to be involved.
B
Okay, you should all see the slide deck now. So, hey — I think this will be the first Code of Conduct Committee community briefing; thanks for having me. I'll do a little bit of an intro to us and then go into what we've done and what we're planning on doing. To start with, there are five members, and we just recently had elections. Tasha and I have been here now for a year, and then Celeste, Karen and Tim just joined, so give them a welcome.
B
And one of the feedbacks we've gotten a bunch of is: folks don't really know what it is we do, or sometimes that we're here at all. We put some work into that last cycle and we're going to be putting more work into it this cycle. I wanted to start with a bit of the who, what and why. We are distinct from the Steering Committee, and we're distinct from ContribEx — both of which we do work with.
B
And yet our focus is much more on providing support and helping to set the culture — helping to ensure the culture is what we all want it to be — and supporting people when there are conflicts, or when folks aren't really sure what to do in a situation. We're here as a resource to help out, and so we focus more on the health and behavior of the community as a whole. More info on that can be found in some of our GitHub repos — I'll get more into what we're changing there soon — and a whole lot more info in the last KubeCon talk, the readout from Tasha at the last KubeCon; and we have one coming up at the next KubeCon again.
B
That's all it is — it doesn't have to be a big thing. We're also here just to provide education or advocacy or support, and sometimes that can take the form of just networking people, or talking with them, or even reaching out and getting external support like mentorship and training resources for people. We haven't had very many requests for that, but it has come up.
B
We
have
a
mailing
list
and
we
are
all
on
slack.
Those
are
the
best
ways
to
reach
us
and,
like
I
said,
one
of
the
things
we've
had
a
lot
of
questions
of
is
what
happens
if
you
do
reach
us,
and
all
reports
are
confidential
who's
reached
out
to
us
as
confidential.
B
What happened last cycle? Like I said, elections. I thank Kyle and Jason and Jennifer for all the work they did with us last year, and I'm super excited to be working with Celeste and Karen and Tim now. We're working on a model with the LF events team.
B
We sort of tested this at KubeCon San Diego, where we could provide support for their event staff and they could better connect with us, so that it wasn't just the LF hosting the Kubernetes community. There had been some communication challenges there between the staff and the community, and so we're trying to bridge some of that. And in the interest of transparency, we're also working towards having transparency reports.
B
You
know
lf
does
sort
of
a
diversity
report
at
the
end
of
summits,
we're
working
to
figure
out
if
there
is
a
model
for
us
and
how
we
could
do
more
of
a
transparency
report
around
community
health
and
not
just
community
diversity,
while
also
respecting
the
need
for
privacy
and
then
and
and
so
to
that
end,
I
wanted
to
share
a
little
bit
of
our
action.
B
Last year, we were called in during some conflict resolution inside some SIGs and between SIGs. I'm not going to say who, but that's the stuff that we're here to support. And then some things happened: you know, people said the wrong thing on stage or here and there, and we've had to offer some coaching and help provide guidance to folks who hadn't adapted to the culture, or were coming from different cultures or different countries where the norms are different. Again, that happens; we are a large, global, incredibly diverse community.
B
It's
it's
normal
for
there
to
be
a
little
bit
of
rough
edges
and
we're
here
to
help
smooth
that
out.
So
our
plans
for
the
upcoming
cycle
there's
a
lot
of
plans
for
documentation,
improvement,
making
it
clear
what
our
internal
sla
is.
We
have
one
as
a
team
how
fast
we
will
respond.
B
If
someone
does
reach
out
to
us,
we
haven't
had
enough
activity
to
really
need
that,
but
we
try
what
are
report
receiving
what
our
triage
procedures
are,
and
this
is
to
address
the
the
questions
that
we've
received
with
like
what
happens.
If
I,
if
I
do,
need
you
and
we
want
to
make
that
clear
and
transparent
what
the
process
will
be
so
that
folks
can
understand
you
know
all
things
have
been
saying
that
we're
just
here
to
support.
B
We
want
folks
to
feel
comfortable
reaching
out
if
they
have
a
concern
that
we
will
handle
it
fully
confidentially-
and
you
know
we're
kind
of
a
young
committee,
so
the
onboarding
and
off-boarding
process
itself
could
use
some
better
documentation,
so
we're
gonna
work
on
that
and
we're
also
working
on
on
documenting
sort
of
the
the
specific
process
or
the
expectations
as
we
work
with
other
groups
steering
controvex
the
events,
team
limits
foundation
and
their
social
media
team
right
kubernetes
doesn't
just
exist
at
events.
We
also
have
slack.
B
We
also
have
twitter,
that's
a
bounded
space,
but
it's
a
very
unbounded
kind
of
wild
space
and
lf.
Linux
runs
some
of
that
stuff
and
we
do
work
with
the
github
and
slack
admins.
If
things
happen
mostly,
what
I
see
on
slack
is
like
spam,
stuff,
whatever
and
and
how
this
affects
anybody
in
the
community
right.
We
just
want.
You
all
know
that
we're
here
to
help
you
probably
won't
see
our
work.
That
is
just
the
nature
of
our
work
in
that
we
have
to
respect
and
we
really
want
to
respect
everyone's
privacy.
B
The question is: some CNCF projects need guidance on establishing code of conduct committees; can they reach out to us for help? Absolutely, yes. I just recently found out that there's some work in the CNCF around that, and we'd love to be part of it.
A
Next
up,
that
is
the
end
of
the
sig
updates.
Thank
you
all
for
the
wonderful
updates
and
looking
forward
to
the
future
for
sure
and
all
the
hard
work
you've
done
in
the
past
release
and
in
previous
releases
we
have
a
few
announcements.
The
kubernetes
steering
committee
elections
are
open.
If
you
are
an
eligible
voter,
you
should
have
gotten
a
ballot
in
your
inbox.
A
I
have
already
cast
my
vote.
If
you
are
supposed
to
be
a
voter
but
did
not
get
your
ballot,
there
is
a
request
for
a
replacement
linked
right
here
in
the
agenda
that
you
may
follow
to
request
that
next
update.
Also
it's
very
important.
We
have
some
steering
folks
on
here.
We
have
some
folks
running
so
very
important
to
vote
for
the
future
of
our
steering
committee.
So
take
it
seriously.
A
Thank
you
so
much
and
we
have
a
passcode
for
zoom
meetings
now
that
passcode
is
I'm
not
sure,
if
I'm
supposed
to
say
it
out
loud,
but
there's
a
passcode
for
all
the
zoom
meetings
check
the
the
the
mailing
list
check
your
email
check
the
agenda.
There's
a
passcode
listed
there.
So
please
use
that
for
all
the
future
meetings
that
you
can't
get
into
with
that
I'd
like
to
announce
our
next
host
pk
on
github
and
slack
is
going
to
be
hosting.
A
They
are
a
very,
very
fond
lover
of
home,
lab,
probably
more
than
me,
and
I
love
my
home
lab.
So
we
look
forward
to
having
pk
host
next
month.
If
you
would
like
to
host
yourself,
please
reach
out
to
george
castro
on
slack
or
sid
controvex
they'll.
Let
literally
anybody
do
it.
They.
Let
me
on
here
so
feel
free
to
reach
out.
It's
super
easy
and
you
just
do
a
bunch
of
cat
hurting.
So
it's
always
great
to
hear
from
different
folks
in
the
community.
A
We
have
a
bunch
of
shout
outs
that
I
like
to
call
people
to.
You
can
check
all
the
ones
in
the
shout
out
slack
channel
if
someone's
done
something
awesome
the
community
and
you
want
to
give
them
a
shout
out
pop
in
that
channel
drop
a
tag
in
here,
I'm
not
going
to
read
through
them
all,
because
we
have
a
great
list,
but
you
can
check
the
agenda
or
that
slack
channel
with
that.
We
have
a
tiny
bit
of
time
left.
A
Are
there
any
open
announcements
people
like
to
share
or
open
updates
for
discussion.
A
I
will
start
reading
the
shout
outs.
Thank
you,
paris.
Let's
see,
I'm
terrible
pronouncing
names.
The
first
shout
out
is
from
at
s
r,
a
g
h,
u
n
a
t,
h
a
n
shout
out
to
at
karen
b
for
her
support
with
119
kubernetes
website
release
process
so
for
she
is
dependable
and
goes
an
extra
mile
to
help
and
constantly
on
the
lookout
for
improvement.
Thank
you
for
the
help.
Karen
b
now
liggett
would
like
to
shout
out
knight
42
for
tireless
work
on
tracking
down
and
fixing
test
flakes
and
real
bugs.
A
aoje,
a
big
shout
out
to
to
liggett
for
not
only
doing
his
work,
but
for
sharing
knowledge
and
expertise.
If
you
have
not
seen
this,
jordan
did
a
great
video
on
how
to
track
down
flakes,
he
shared
a
gist
of
it.
I
believe
it's
linked
in
the
announcements
in
the
sick
testing
channel
but
go
read
the
gist
and
watch
the
the
video
it's
really
great
and
we
can
start
tracking
down
some
of
those
flakes.
A
Lauri Apple would like to shout out xmudri (Marko) for driving efforts to firm up the roles and responsibilities for our release manager associates.
A
sftim would like to shout out zacharysarah for a wealth of positive, valuable contributions to SIG Docs. ehashman would like to shout out Jay Vance for an excellent first PR. ehashman helped backport the fix to 1.17 through 1.19, and, cross-checking it against the production clusters that they support, discovered this bug was producing a full two percent of their production logs, so the fix saved a lot of wasted production log space. dholbach would like to thank Somtochi Onyekwere.
A
They
have
done
great
work
in
their
google
summer
of
code
internship
this
year
check
out
the
write
up
in
the
blog
post
on
that
sam
tochiyama
would
like
to
shout
out
their
awesome
mentors.
A
Through
the
google
summer
of
code
internship,
justin
b,
stealthy
box
and
the
hashtag
cluster
add-ons
channel
community,
they
were
really
amazing.
A
Those
are
all
the
shout
outs,
I'm
sorry
if
I
butchered
names
but
keep
them
coming
for
the
next
meeting,
shout
out
your
fellow
community
members
and
thanks
everyone
for
joining
us
for
this
update,
there's
nothing,
I'm
forgetting,
and
if
there's
nothing
else,
I
think
we
can
call
it
and
sign
off
and
have
a
extra
eight
minutes
back.
D
I'm sorry I joined late. First of all, hi everyone. I did some contributions to Kubernetes in the past, and I'm back after a long pause, and I don't know if this is the correct place to make my contribution now. I found a small usability issue, and before I open a bug and have thousands of comments of discussion, I thought it might be easier to just find a sponsor for it.
D
So
I
know
is
this
the
right
place
to
look
for
a
sponsor
or
for
the
pr.