From YouTube: Kubernetes Community Meeting 20190124
Description
We have a PUBLIC and RECORDED weekly meeting every Thursday at 6pm UTC.
See https://github.com/kubernetes/community/blob/master/events/community-meeting.md for more information!
A: All right, welcome everybody to the weekly Kubernetes community meeting. The date is January 24th, 2019. I'll be your host today; my name is Jorge Castro, I work as part of SIG Contributor Experience, for VMware. So welcome, everybody. We've got a packed agenda. Please do remember that this meeting is being streamed live to YouTube and recorded for the public, so everything you say will be part of the public record. I've asked Bob to whack a link to the agenda into the sidebar in Zoom if you want to follow along with the notes. We are still looking for a note-taker — you'll see the little placeholder there in the notes — so if you want to help us out, please feel free to just dive into the notes and get started. And with that, we are going to have a quick demo, which is tracing pod startup in Kubernetes with David Ashpole, and then Aaron Crickenberger is gonna...
B: My name is David Ashpole. I work at Google and I attend SIG Node, although this will sound more SIG Instrumentation-y than Node-y. First I want to give a big shout-out to my intern Sam, who actually did all of the work here and whose code I will be running for you today. He is at monkey-inator on GitHub and he has a KEP out for review, but his internship is done, so he can't be here to present — that's why I'm here. So I'll get started.
B: Let's see. It's really hard in Kubernetes today to gauge whether Kubernetes is actually doing what we want it to do, in the time that we want it to do it in. Sometimes Kubernetes gets stuck or is slow, and it's hard to figure out where in the process it's stuck, or which component is being slow. And if you were at KubeCon, or if, like me, you watch lots of KubeCon videos: tracing is a big thing in microservices, and I think it's quite applicable to Kubernetes.
B: ...if we can get it to work. Just to go over the current set of tools that we have: events sort of fill this gap where, if something goes badly wrong, you'll get an event. But not everything has an event, and it doesn't necessarily tell you where in the process something is, unless you really know the system.
B: Latency metrics are useful for debugging latency problems, but only if you've pre-configured them ahead of time, and you can't always figure out which latency metric is associated with which pod, for example, because of cardinality problems. Let's see. So that introduces distributed tracing. Distributed tracing I like to think of as structured, context-aware latency logging. Because it's sort of like logging, you don't have any cardinality issues; and latency is a first-class concept in tracing, so it's really built to solve latency problems.
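The idea of "structured, context-aware latency logging" can be sketched in a few lines. This is a toy illustration only — plain Python, not the OpenCensus API or the actual KEP code: each span records its name, its parent, and its latency, and identifying details (like which pod the work was for) ride along in the span tree rather than in a metric label, which is why there is no cardinality explosion.

```python
import time
from contextlib import contextmanager

class Span:
    """Toy span: a named operation with a parent, children, and a latency."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        self.start = None
        self.end = None

    @property
    def latency(self):
        return self.end - self.start

@contextmanager
def span(name, parent=None):
    # Open a span, attach it to its parent, and record wall time around
    # whatever runs inside the `with` block.
    s = Span(name, parent)
    if parent is not None:
        parent.children.append(s)
    s.start = time.monotonic()
    try:
        yield s
    finally:
        s.end = time.monotonic()

# Simulate the pod-startup trace from the demo: a top-level "create-pod"
# span with scheduler and kubelet work nested underneath it.
with span("create-pod") as root:
    with span("schedule", root):
        time.sleep(0.01)   # stand-in for scheduler work
    with span("start-container", root):
        time.sleep(0.02)   # stand-in for kubelet/runtime work

print(root.name, [(c.name, round(c.latency, 3)) for c in root.children])
```

A backend like Zipkin renders exactly this parent/child structure, which is what makes "which step was slow?" answerable at a glance.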
B: Basically, let's see... We decided to use OpenCensus for a number of reasons. It's one of a few vendor-agnostic tracing libraries out there, and it has some features that we found very helpful and that I think fit well with what we were trying to do in Kubernetes — but I'll go into that later. For now, you can think of it as: instrument once, and then you can push to Zipkin or any of the other tracing backends that exist. So, as quickly as I can, I'll get into the demo.
B: So I'm just running two nodes here, pretty simple, and I'm running a couple of pods. First of all, I'm running this Zipkin server, which is pretty standard and which I've pulled off the internet somewhere. I have a service in front of it so that I can hit it here. And I'm running the OpenCensus agent as a DaemonSet, and the thing I want to point you to — all right, this isn't it.
B: So that's what the agent gets us: there's no Stackdriver or AWS code living in the kubelet or anywhere else in Kubernetes. It's all open source, and we let the agent handle the configuration and push spans to the different backends, so we can decouple that. Let's see, I'll get back to the demo.
So I'm running the Zipkin server, I'm running this agent that pushes to the Zipkin server, and then I'm also running this context injector, which adds trace context as an annotation, so we can do it out of tree.
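As a sketch of how those pieces fit together: the OpenCensus agent is configured with a receiver that instrumented components push spans to, and an exporter pointing at the in-cluster Zipkin service. The field names below are recalled from the opencensus-service config and the service name is made up, so treat this as illustrative rather than copy-paste:

```yaml
# Illustrative OpenCensus agent config (assumed field names/endpoints).
receivers:
  opencensus:
    # instrumented components export spans to the agent on this port
    address: "0.0.0.0:55678"
exporters:
  zipkin:
    # the in-cluster Zipkin service from the demo (hypothetical name)
    endpoint: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans"
```

Swapping the exporter here is what lets you "instrument once" and change backends without touching Kubernetes itself.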
B: We can see that we've added this annotation here, which just has an encoded context on it. That way, all the components that look at this pod and do things on its behalf essentially know how to reflect back that the work came from that pod.
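On the object, that looks roughly like the snippet below. The annotation key and value are hypothetical — the actual key used by Sam's context injector may differ; the point is just that a base64-encoded span context rides on the pod's metadata:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  annotations:
    # Hypothetical key: the context injector stores an encoded
    # trace/span context on the pod so that out-of-tree components
    # (scheduler, kubelet) can continue the same trace.
    trace.kubernetes.io/context: "<base64-encoded span context>"
spec:
  containers:
  - name: app
    image: nginx
```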
So, let's see if we have anything in Zipkin... cool. We have this trace, which happened just a couple of seconds ago, and this is what we end up with: the top-level span, which is for creating the pod.
B: Then we have the scheduler work, which is pretty simple because there's just one node, and then we have all the stuff that the kubelet is doing. So if there were something that went wrong in here, it would be really easy to tell what part of the process it went wrong in. We used this to debug a bug that we had in GKE, where the StartContainer call was taking a really long time, and it turned out to be a container runtime issue.
B: This is really useful for debugging those sorts of problems. Let's see — and then the last part of my presentation is just some of the, oops, the future things that we can do with this. Because it's all context-based, and contexts are sort of made to be propagated, we can propagate this into plugins, like the device plugin, to analyze those processes; we can propagate it down into the container runtime and get traces from the container runtime.
B: We can push it through the downward API so that init containers, for example, can tell us how they're progressing — there are just oodles and oodles of ways that we can do that. The other thing we hope to do in the future is to use this for objects other than pods, but we're still sort of thinking that through. And then we're definitely hoping to add tracing for other parts of the pod lifecycle, such as updates or deletions, and that should actually be very straightforward.
B: We just haven't done it. The other cool thing is that anything else that's context-based, such as context-aware logging, we could then link to. So if you have a span and you're wondering what happened during that period of time, you can actually go to the logs for that component and figure out exactly what it was logging. So there are a lot of cool future applications for this, and we're just getting started. Cool — thank you, I'll take questions.
A: When we're done with this, if you could drop a link to the KEP in the chat — yep — that would be useful for the notes. Any other questions?
C: I don't know. So, let's talk about where we're at in the release lifecycle. Actually, you know what — let me share my screen so you can follow along with me. It's gonna be this desktop; it's gonna look really tiny at first. So here are the community meeting notes — as always, I use them as my notes. We are now at week three for the 1.14 release. That link takes you to the release schedule, which tells you what's up.
C: We are in week 3 and we are coming up on week 4. The next upcoming milestone is enhancements freeze. What's enhancements freeze, you ask? You click the link and you see that enhancements freeze says that all enhancements must have an associated issue in the enhancements repo by Tuesday, January 29th, and each must be in the 1.14 milestone.
C: Basically, you know, we have gone around to all of the different SIGs asking them what work they want to do for this release, and we have asked them to please describe that work in the form of a KEP — the KEP being a single document that holds useful information like: what is your test plan for this enhancement, and what is your upgrade/downgrade plan for this enhancement? Things of that nature. So we have a PR out to discuss the KEP template. I feel like I see a hand up from somebody.
C: Yes, I am — I have a bad attitude, but I kind of feel like we can collectively agree, as humans, to try and move this process along. So what we're asking for is: get a PR open with a KEP; preferably, could you get that PR merged; preferably, could we say that KEP is in an implementable state — which basically means: yeah, everything looks good, you're pretty much ready to go.
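For reference, a KEP is a markdown file whose YAML front matter carries that status. The sketch below is from memory of the KEP template of the era — field names may have drifted, and the values are made up, so check the template PR under discussion rather than this:

```yaml
---
title: My Enhancement            # hypothetical
authors:
  - "@example-handle"            # hypothetical
owning-sig: sig-example
creation-date: 2019-01-24
status: implementable            # provisional -> implementable -> implemented
---
# The body then carries the sections the release team looks for,
# e.g. a "Test Plan" section and an "Upgrade / Downgrade Strategy"
# section, as mentioned above.
```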
C: As always, you can take a look at this milestone here, which shows every enhancement issue that's currently slated for the 1.14 release. Our wonderful enhancements lead, Claire Laurence, is tracking all of this in a spreadsheet, as our enhancements leads are wont to do, and I think the stats she gave me are that at the moment we have roughly 41 enhancements: 19 of them are in an alpha state, 13 of them are in a beta state...
C: ...and six of them are in a stable state. Not to express a preference, but it'd be great to see more of those on the stable end of the spectrum and fewer of them on the alpha end — but that's just me as a human being. Of these 41 enhancement issues, 19 of them don't have KEPs right now, so you'll probably be hearing from me, Claire, and some of the enhancement shadows she has working with her about getting that taken care of. Okay, moving on to what we have done so far, a little bit.
C: Our release team has shadows now. We're waiting on PRs to clarify the CI signal shadows and the release branch manager shadows, but thank you to everybody who signed up to participate — looking forward to working with all of you. We also have a release notes draft out — that's this link here — so if you're working in any of these particular SIGs and this looks a little weird to you, come talk to our release notes lead. Let's see, from a CI signal perspective...
C: In a couple of releases past, CI signal has been maintained using a Google Doc. That may not be what we use going forward — we're still kind of figuring it out — but here is the CI signal report from my CI signal lead this week. The TL;DR is that we, as a release team, watch the release-master-blocking dashboard.
C: That's what I just clicked the link to, by the way. Did you all notice TestGrid's summary page now has these really awesome icons that are big and bold, so you can really see what's green and really see what's red — and, oh god, there's so much purple for flaky — but hey, it's a little bit more obvious. So this is the dashboard we're paying attention to. If this is broken, expect people to come ask you why it's broken for your particular test.
C: I personally don't know of anything. Something else did pop into mind, about that lovely release-master-blocking dashboard I showed you: one of the criteria that the CI signal lead, myself, and some of my shadows are working on is making sure that every job on that board is owned by somebody — and by "owned" I mean it has a TestGrid alert set up. For those of you who don't know, TestGrid can send an email to one or multiple email addresses.
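Concretely, "owned" means the job's dashboard tab in the TestGrid config names someone to email on failure. A sketch follows — the field names are recalled from the kubernetes/test-infra TestGrid config, and the job names and address are made up, so treat it as illustrative:

```yaml
dashboards:
- name: sig-release-master-blocking
  dashboard_tab:
  - name: my-blocking-job              # hypothetical tab name
    test_group_name: ci-my-blocking-job
    alert_options:
      # one address, or multiple comma-separated, gets the failure email
      alert_mail_to_addresses: "sig-example-alerts@example.com"
```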
C: There are SIGs out there that do this today for some of their individual dashboards — shoutouts to SIG Storage, SIG Network, and SIG Cluster Lifecycle for doing that. This is something you can absolutely do on your own, but you're gonna be voluntold by SIG Release, and if we can't seem to find the appropriate owners, we're going to suggest that maybe those jobs don't quite belong on release-master-blocking until somebody willing to say they should be blocking steps up to own them. So look for info about that later. Okay, now I'm done. Okay.
D: Okay, so since my last update — and that's already, I think, a quarter gone by — I just opened all my old slides and looked at how much progress we've made; I feel so proud of our community. So, again, first the general SIG Node administration updates: we still hold the weekly SIG Node meeting regularly, every Tuesday at 10:00 a.m.
D: The agenda and the notes are linked there, and we record all the meetings and push the recordings to YouTube, so if a topic interests you, you can just go find those. Also, for the new year, we try at every meeting to decide on action items and summarize what we discussed, and if there's a pending issue, we summarize the action item and what's next. That's the new protocol we're trying to drive for the SIG Node meeting.
D: So hopefully we can make a more visible improvement in the decision-making process in the SIG. Then we also have the biweekly — or maybe monthly, depending on the topic — Resource Management Working Group meeting, which is every other Wednesday at 11:00 a.m. Pacific time. So... sorry.
D: So last time I proposed that we update our SIG scope, and we continue to evolve the SIG Node scope. This is what it needs to cover: obviously, in Kubernetes, the pod API — SIG Node stewards the pod, the pod lifecycle, and the container lifecycle. We steward the node API, and together with SIG Architecture a lot of the API reviews — for example, for the debug container.
D: We discuss and decide, and then we direct them to SIG Architecture for discussion. Same thing for SIG Network and SIG Storage: APIs like CSI have some parts related to SIG Node, so we discuss and approve them in our SIG and then send them to SIG Architecture for the cross-SIG review. We also own node management and the node controller, and node-level performance, scalability, and reliability especially. And last, in Q4...
D: ...we had a lot of enhancements to the node problem detector: we improved the node problem detector and added new problem detectors to detect new issues. And then there are the container runtimes — there's been a lot of progress on the container runtimes: beyond Docker, there are containerd and CRI-O. And last Q4 we also proposed...
D: ...we proposed the containerd shim v2 API to the containerd community, which was well accepted. It is currently used by Kata and others, and there's discussion about how to move Windows support to the containerd shim v2 API as well. So there's been good progress collaborating with the containerd and OCI communities. Then there's device management and image management.
D: On image management, we recently have a new proposal, being discussed with the containerd community, on how to do image management for remote images. A lot of progress has been made there — thanks to the containerd community for the collaboration. All of this helps move things forward on the node. Then there's node-level resource management, together with the SIG Scheduling team.
D: Many discussions are ongoing there, including topology-aware scheduling, and a lot of progress has been made, but it's not finalized yet. Then there are the other things: issues related to the node, just keeping things working, and monitoring — we had a proposal a while back, the Core Metrics API, to refactor the monitoring pipeline.
D: So there are engineers working in those areas and trying to make proposals — debuggability has been a key issue there. Then there's node-level isolation, the security side, and, you know, the last one is the host OS and kernel interaction. All of those kinds of things are in SIG Node's goals. So let me give a quick update about accomplishments in Q4 last year.
D: Q4 was a short quarter, and there were also two KubeCons, but I opened my updates from last time — what the plan was — and I think we made a lot of steady progress on every priority we mentioned. The first is RuntimeClass: it was alpha in v1.12, but in Q4 we actually made more progress on the integration with Kata and gVisor through containerd and CRI-O, using the containerd shim v2 API we proposed to the community.
D: That work is ongoing and making steady progress, and it improves the performance under Kata, so a lot of the community benefits from those kinds of things. We also shipped the efficient node heartbeat (the node Lease) in 1.13, and SIG Scalability is picking up the work right now to run the benchmarks, and we're seeing significant improvement on scalability. Another one is Windows support: together with SIG Windows, we are trying to get the Windows node to GA — and please note...
D: ...this is Windows *node* GA; it's not Windows as an additional platform fully supported by Kubernetes. In Q4 more progress was made on the testing — attaching it to TestGrid, running on top of GCP and in other environments — and there's been a lot of progress automating the Windows testing work. Engineers are also working through the test results in detail; especially on the node side, the Windows node really needs conformance tests, so we look into those things and debug.
D: There's also an existing feature that has been in beta for a while; we plan to promote it to GA, but we also plan to eventually deprecate the old mechanism — the plan is in the 1.17 timeframe — so we can switch to the RuntimeClass path, which we've been talking about for a while. We also focus on node-level isolation: there are KEPs, and the user namespaces effort is going on, especially on the process side.
D: There are KEPs, and we want to have a concrete design and proposal so we can move forward. So again, for Q1: Windows node GA — the SIG groups want to coordinate more to get the Windows node to GA; and again, Windows node GA is not Windows as a full platform. Then we're also starting another round of design discussion and review for in-place pod resource updates. It has been around for a while, and we already have a proposal...
D: ...but it hasn't been updated recently, and we see increasing demand for those features, so we're putting effort into that this quarter, together with the Autoscaling and Scheduling SIG groups. Another one is NUMA-aware scheduling, which is also called topology-aware scheduling; mostly Intel, and also other community forces, are focused on those things. These are the Resource Management Working Group's focus areas. We also want to continuously improve debuggability at the node level, and there's the debug containers effort...
D: Hopefully this time, since we solved the API question that was the major blocker, we can make progress on the debug container and get it to alpha. Also, David just gave the demo on tracing — that's what the engineers, together with the intern, have been working on, and we want to push this to the community and move it forward. And the last thing is just keeping everything running: issues, bugs, test failures — we put effort into those every quarter.
C: Friendly 1.14 release lead here, on a huge pages note: I think somebody had asked about whether or not they needed to write a KEP for huge pages, given that it's been around basically forever and there's a design proposal and lots of docs on it. Yes, you do. No, I haven't seen one — somebody just asked about this in the SIG Node channel (hi, Michelle). For things that have a lot of prior documentation, I'm looking for: why this, why not that? Why this way, why not that way? The things that design proposals typically cover.
D: I understand. I think our team did discuss this at the SIG Node meeting two weeks ago. Yes — I'm going to follow up with the engineer on this topic to make sure we follow the process, and ask them to join the SIG meeting and explicitly share the progress with the community. Yes, okay.
A: That, right there.

E: All right, you should be seeing a very plain-looking slide deck. This is David's deck; I'm co-lead for API Machinery and I'll just briefly take you through the update. As Dawn mentioned, Q4 was short. So last release cycle we delivered CRD webhook conversion as alpha in 1.13. If you're using custom resources, please look at this — please try turning it on and using it so you can give us good feedback. It's hopefully a solution that will let 95% of use cases work; if it doesn't work for yours, be sure to let us know.
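Turning the alpha on looks roughly like this: with the `CustomResourceWebhookConversion` feature gate enabled, the CRD points at a conversion webhook service. The group, resource, and service names below are placeholders:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.example.com        # placeholder CRD
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1beta1
    served: true
    storage: false
  - name: v1
    served: true
    storage: true                   # objects are stored at this version
  conversion:
    strategy: Webhook               # default is None (no conversion)
    webhookClientConfig:
      service:
        namespace: default          # placeholder service
        name: crd-conversion-webhook
        path: /convert
```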
E: We also introduced dynamic informers and listers. We had a dynamic client, but it was very difficult to actually build a controller off of the dynamic client; now we have informers and listers, like every other generated client, so it should be much easier to develop a dynamic controller. Looking forward to what we're planning for this release: we're continuing our work on extensibility, and we are investigating a path for admission webhooks to reach GA. This does not mean we are planning for them to be GA.
E: They've been in beta since 2017 Q4 and they've worked fairly well, with a few cases we know are missing. The server-side apply work that has been ongoing for several months now we are planning to bring in as alpha. There is a pull request open; it's had a KEP for a long time, and it will be well gated off. But again, if it's a thing you're interested in — and you probably are — once we ship it, it would be good for people to try it while it's an alpha and give good feedback.
E: We are also going to deprecate a couple of features that have been on the chopping block for a while. We have deprecated the swagger JSON file that I don't think anyone has used for a long time. This is not the normal OpenAPI — OpenAPI is still gonna stay, OpenAPI is still going to be aggregated; this is specifically about the old swagger JSON file.
E: We are also going to remove initializers. For those who remember, initializers and admission webhooks both aim to solve similar problem domains, and we think that admission webhooks are easier to conceptually understand, even though they have different limitations. So we are going to remove initializers — it's not clear whether that's going to happen in this release or the next; we are still working through that path. They never made it past alpha, which makes them a lot easier to remove.
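For anyone migrating off initializers, the webhook side of that comparison is registered with an object like this (the webhook name, service, and rule are placeholders):

```yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: pod-policy.example.com       # placeholder
webhooks:
- name: pod-policy.example.com
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  clientConfig:
    service:
      namespace: default             # placeholder service
      name: pod-policy-webhook
      path: /validate
  failurePolicy: Ignore              # fail open if the webhook is down
```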
The last thing that we are looking at is API request fairness.
E: This has come up a fair number of times from different SIGs. There is a design out that we are looking at — not for implementation this quarter — but if you have comments on the design, this would be a good time, before we start taking it through more thorough review and starting the KEP process for it.
C: ...couldn't speak to — let me find them. Okay, so first off, I don't think either of the appropriate people are here, but I wanted to announce that the GitHub administration team greatly, gratefully thanks Garrett Rodriguez for his service on the team, and is welcoming Nikhita — I'm sorry, I'm not even going to bother pronouncing your last name, I'll get it right someday — but welcome, Nikhita, to the GitHub admin team. Okay, one of the reasons I'm super excited about this, for what it's worth: the GitHub admin team...
C: ...is a team of six people who have owner access to all of the GitHub orgs — all of the hundred and fifty, maybe even hundred and sixty-something, GitHub repos. So these are people that you really don't want accidentally fat-fingering a button, and who are really detail-oriented in their responses. And I feel like we're a little bit clustered up right now: prior to this change, we had four people from Google working on this team and four people from the Pacific time zone. So Nikhita bumps that off — she doesn't work for Google.
C: She works for Loodse, and she works in India, so she's not on Google Standard — I mean Pacific Standard Time. So we're looking to diversify the team a little bit. I wanted to thank everybody for helping this process along — super excited. Okay, the next little announcement: you may remember I talked about the steering committee thinking about doing a public meeting last week.
A: So we have a channel on Slack called #shoutouts. If you see someone doing something above and beyond the call of duty in Kubernetes throughout the week, just mention them in there, and then the host for that week's meeting will aggregate them and read them out. So: Aaron Crickenberger would like to thank [inaudible] for fixing yesterday's Prow outage even though he wasn't on call, and then writing up a post-mortem, which I've dropped a link to there in the notes. Eduardo — wave, say hello, so the camera sees you — thanks for reaching out to all the SIGs and for his continued work on revamping the Kubernetes dev guide. Just a quick info: Eduardo is the intern who's going to be working on revamping our developer guide. For those of you that have been keeping track, we currently have a devel directory in the community repo — it's a bunch of stuff, and it has been a mess for a long time — and Eduardo is going to be going around and trying to help us organize that.
A: So please welcome him. Moving on: shoutouts to Hippie Hacker, Tim Hockin, and Brendan Burns for transitioning the first piece of project infrastructure to the community — this is DNS. You should have seen this announcement on the dev list, and just a quick editor's note: you can all stop emailing Tim Hockin for your favorite subdomains on the kubernetes domain. Shoutout to mrbobbytables for putting together a community documentation style guide, which you can find at the link that he's popped on there.
A: That's kind of helping ensure that, when we're working on our READMEs and stuff like that for all your SIGs, all the documentation that's in that repo flows and makes some sense. Rumba Alarcón — I hope I got that right — is hosting facilities in Mexico City and is looking to start a meetup; follow the link if you're interested in helping out. This one's interesting — this has been popular: Henning Jacobs is collecting a list of Kubernetes failure stories, which I thought was really interesting.
A: It's kind of a service to the community when someone talks about how they failed, and he's collecting a list of talks from KubeCons around the world that outline failure stories. So I think that's really useful — definitely a lot of good talks and lessons to be learned at that link. Just a reminder that you can give a demo on this call, like we did in the first 10 minutes; if you're interested in that, see the top of this document. You're also invited to host this meeting as well.
A: We have a rotating set of hosts and are always looking for new shiny faces to help, so ping myself or Paris if you're interested in that. And lastly, we opened a talk-proposals channel in Slack. I know it's past the KubeCon CFP, but we figured it'd be good to have a channel that's open year-round if you're looking for help with a submission, or you want to share slides with someone, or you need a review, or you need help with your...