From YouTube: Kubernetes Community Meeting 20190411
Description
We have a PUBLIC and RECORDED weekly meeting every Thursday at 10am PT.
See https://github.com/kubernetes/community/blob/master/events/community-meeting.md for more details.
A: Hello everybody, this is the Kubernetes community meeting. Today is April 11th, 2019. Welcome, everybody; it's great to have you, and thanks for being here. My name is Lachlan Evensen, I'm a PM at Microsoft, and this is my first time hosting the Kubernetes community meeting. So thank you for having me; I'm really excited to be here. As you can tell, I'm even putting on my best Australian accent. So, thanks for being here; we've got a wonderful lineup.
A: I got a little bit of feedback there. There we go. I'm going to put the agenda in the chat for everybody. We have a packed and fantastic agenda this morning that I'm really excited to take everybody through, and we've already had a volunteer to take notes. So I'd like to give a shout out to Solly Ross over at Google. Thanks, Solly, for taking notes; we really appreciate it. So, let's get into this packed agenda.
C: First, we get to see whether or not I can actually type, just so you know that it's live. Okay. So late yesterday I built up a cluster. I happen to be working on Azure, so I used a tool called AKS Engine. What I did was deploy a Kubernetes 1.14 cluster, but what's different about this cluster is that not only does it have nodes running Ubuntu, I also have a few nodes running Windows Server in it. And so what's great about this...
C: ...is that it means I'll be able to deploy both Windows and Linux workloads within the single cluster, and since this is all using the same, consistent Kubernetes API, I can actually manage all of this from Linux, OS X, or Windows, all using the same familiar tools like kubectl and one of my favorites, Helm.
C: So first off, I'll just show a really quick deployment before I kick this off. There's nothing particularly different about deploying a Windows app, other than the fact that the container image I'm deploying happens to be a Windows app. And then, because I've got multiple OSes in my cluster, I'm using node selectors to make sure that the workload lands on the appropriate node. Right now I'm using node selectors because all the things I'm going to use in my demo have them set.
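A minimal sketch of what such a workload might look like; the image name and app labels are hypothetical, and in clusters of this era the OS node label was `beta.kubernetes.io/os`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: win-webserver          # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: win-webserver
  template:
    metadata:
      labels:
        app: win-webserver
    spec:
      containers:
      - name: web
        image: example.azurecr.io/win-webserver:latest  # a Windows container image
      nodeSelector:
        beta.kubernetes.io/os: windows   # land only on Windows nodes
```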
C: But if you wanted to introduce Windows nodes into an existing cluster, and you didn't have the OS selector set for your Linux workloads, you could instead choose to taint the Windows nodes; then nothing will get scheduled to them, and you just add the toleration on the Windows workloads. But for today, I'm just going to be using node selectors.
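That alternative might look roughly like this; the node name and taint key are hypothetical:

```yaml
# First taint the Windows nodes, e.g.:
#   kubectl taint nodes win-node-1 os=windows:NoSchedule
# Then add a matching toleration to the Windows workloads only:
spec:
  template:
    spec:
      tolerations:
      - key: "os"
        operator: "Equal"
        value: "windows"
        effect: "NoSchedule"
```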
C: Okay, now it's a race to see... okay, that one's running. All right, so now that that's running, I'm able to use the standard services, just a load balancer; here is a default one that was deployed by Azure. But if I wanted to use something with an ingress controller using nginx, those workloads can run on Windows or Linux nodes. It doesn't matter, because the cluster IPs used for all the services are still routable across the entire Kubernetes cluster.
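A Service fronting the Windows pods with a cloud load balancer might be sketched like this (names are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: win-webserver
spec:
  type: LoadBalancer        # provisions a cloud load balancer with a public IP
  selector:
    app: win-webserver      # selects the Windows pods from the earlier Deployment
  ports:
  - port: 80
    targetPort: 80
```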
C: It can run anywhere. And then, going to that public IP, I've got the web server up and running. If you want to try hitting it, feel free. That's just a quick sample of what this looks like from someone that's administering stuff on a Linux machine but deploying Windows workloads and Linux workloads. And also, before I get off of this...
C: ...let me quickly show the Dockerfile here. The Dockerfile is, of course, using the Microsoft .NET Framework base image that's published there, and then I'm just configuring an application pool and adding in the things that were built. When I built this code, I built it within Microsoft Visual Studio the same way as if I was going to build and test it locally, but I'm just packaging up the binaries in a Docker container. Then, to actually deploy this, I'll go ahead and deploy it from Helm.
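A rough sketch of such a Dockerfile; the base image tag, pool name, and paths are hypothetical:

```dockerfile
# ASP.NET base image (Windows Server Core with IIS and the .NET Framework)
FROM microsoft/aspnet:4.7.2

# Configure an IIS application pool for the app (hypothetical pool name)
RUN powershell -Command \
    Import-Module WebAdministration; \
    New-WebAppPool -Name 'DemoAppPool'

# Copy in the binaries built with Visual Studio
COPY ./bin/Release/ /inetpub/wwwroot/
```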
C: If I were to run this in production, I would go ahead and use a service catalog to actually deploy my SQL Server. But since I'm going to run it on this little dev cluster that I set up, I've got it set up to actually run SQL Server on the Linux nodes. So that's going to register a service, and then it's going to run the Windows web application over on one of the Windows nodes.
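The two deployments might be kicked off roughly like this, using Helm v2 syntax of the time; the release names and local chart path are hypothetical:

```shell
# SQL Server on the Linux nodes (Linux container image)
helm install --name demo-db stable/mssql-linux

# The Windows web application, scheduled to Windows nodes via its nodeSelector
helm install --name demo-web ./charts/win-webapp
```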
C: All right, so now I have the two pods running; they're of course going to be on different nodes, and I'm still waiting on an external IP, so let's just give that a second. Okay, so now I have that; I'll just hop on over to a web browser. The first time I load this, since I did a development deployment, it's going to take just a little bit of time.
C: All right, so anyway, while I'm opening up trouble tickets, I've got my app working just the way I expected, with the Windows and Linux stuff all running side by side, and I'm able to use the same management tools, whether I'm on Windows or Linux or macOS, to deploy applications on both Windows and Linux, all in the same cluster. So that's it.
A: Thanks for taking the notes today; very much appreciated. All the links that Patrick just showed to build that demo are actually in the agenda, if you are interested in building your own version of that demo or taking a look. And obviously Windows nodes went GA in 1.14; is that correct, Patrick?
D: All right, so, giving an update on 1.15: the 1.15 release cycle officially began on Monday. We had our first meeting, and what's been happening most of this week is that we've been working on getting the schedule finalized. It is available in the GitHub repo for the release on the README, which I'll drop a link to in the chat. And a possible update to what I mentioned on last week's call: this is going to be an 11-week cycle, not what I had originally thought.
D: Some additional upcoming milestones that might be interesting for folks: we're going to start enhancements tracking next week. Kendrick, who is the enhancements lead, should be starting to go through the GitHub issues for any items that want to be included in the 1.15 milestone. If your SIG has any items that they're hoping to get on the 1.15 milestone for enhancements, as a friendly reminder, you must have a KEP in an implementable state by enhancements freeze, and enhancements freeze is on April 30th.
D: You must also have an open issue in the 1.15 milestone for that. And for the KEPs, another friendly reminder: we would really like to have test plans and graduation criteria in them. Having these test plans and graduation criteria is really helpful in terms of evaluating how ready each enhancement is for when it's going to be released.
A: No, it doesn't look like there are any questions. Fantastic. Thank you so much, Claire, for the update; very much appreciated. There are also some patch release updates in there: I think the first 1.14 patch went out earlier this week, and a few others are planned, so please take a look at the agenda for more there. Now we go over to the contributor tip of the week, and we have George, who's going to be doing that, so I will hand it over to George. You have the floor, George.
E: That's okay; I'm struggling with this share. Okay, can everyone see this? So it's going to be a real quick one. If you go to cs.k8s.io (oh, this is called Hound; this is Tim's thing, and I'm sure others have been working on it), it's for code search. I was wondering: what am I responsible for in OWNERS files? Because I have no idea; I mean, sometimes I get things assigned to me in GitHub and I have no idea how I ended up on there. So I decided to look myself up.
E: You know, if you're in an OWNERS file and you don't think you should be, you can go ahead and reach out to the other people who are reviewers and approvers there to update that. So that is your tip of the week, as soon as I figure out how to stop sharing... okay, there we go. Any questions on cs.k8s.io? No? Okay.
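For reference, an OWNERS file is a small YAML file like the following (the handles and label here are hypothetical), and cs.k8s.io lets you search for your own handle across all of them:

```yaml
# OWNERS file in a Kubernetes repo directory (hypothetical handles)
reviewers:
- alice
- bob
approvers:
- alice
labels:
- sig/node
```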
A: Yes, it's actually Jeffrey, who's sitting in a room: you are the KEP of the week, Jeffrey. So, this KEP. I'll just take folks through KEPs at a high level quickly: kubernetes/enhancements is the repository you're looking for, and there you will actually find a whole bunch of issues and pull requests relating to KEPs. So here we have a pull request for release notes improvements, and I'll quickly take you through what we have here. It was suggested by Jeffrey, so here we actually have the PR which contains the KEP.
A: If I move my little window, we can actually take a look at what this looks like. This KEP specifically goes about defining a new site for end users to better consume the generated data with regard to release notes. So I would urge everybody who loves taking a look at the release notes (and the release team does a fantastic job of creating these): Jeffrey is actually proposing changes to this process, and you can come in, take a look at what they might be, and feel free to comment.
A: Fantastic, thanks, and thank you for the wonderful KEP. All right, so that takes us over to SIG updates. This is the time in the meeting where we have several SIGs come in, chairs from different SIGs, and provide updates about what they're working on currently. So, our first SIG chair: we have Chris Hoge from SIG Cloud Provider. I will invite you to the floor. Hey Chris, how are you doing? Wonderful, take it away.

G: Thank you all.
G: So, just a quick brief, I guess, talking about our mission a little bit for those who don't know: SIG Cloud Provider is kind of the parent SIG of all of the individual providers, and our goal is to work on things within Kubernetes that are common across all the different cloud providers, in particular how cloud providers are loaded and used, shared documentation, and things that are of interest to all of the groups. So, for the last cycle...
G: Actually, the biggest thing that we've been working on is the Kubernetes cloud provider extraction and migration. Our tech leads on this have been Mike Crute and Walter Fender, and right now it's our highest priority. The idea is that Kubernetes is meant to serve as kind of an application orchestration kernel, and rather than have the functionality for all of the different providers and all of the different tools in there...
G: ...instead, it can load that functionality and provide the services with the different providers. But as it stands right now, there are a number of different cloud providers that are actually deeply integrated into the Kubernetes codebase, and we're working on moving all of those providers out. That means we need to do a few things. The first is that we need to have an interface for those cloud providers to interact with, and that work is pretty much mostly done.
G: Many of them have integrated fairly deeply, and we're working to remove all of those internal dependencies and migrate the code to staging, so that all of the providers can be removed from the codebase in a clean way that's not going to break existing users and also provides some sort of migration path.
G: So we've got a pretty long work list on this; it's still in progress, and it remains our highest priority for 1.15. But if you take a look at the accomplishments, there's actually a pretty long list of things that people have worked on toward moving this out. Right now on this list, there's actually only one thing that isn't completed. Now, that doesn't mean that the work is done.
G: There's still a lot ahead of us to do, but it's been moving along really quickly, and there's been a tremendous amount of effort and work put into that. I want to take a moment to say thanks to all of the contributors who've been working on those issues. This certainly isn't a full list, but it's a list of the people who have taken ownership of those issues and provided different implementation paths for that.
G: So, what's next for 1.15? Well, another one of our big projects that we're working on right now is restructuring all the different cloud provider SIGs into something else. We made a proposal, which is linked in this slide deck (we can also add it to the links within the meeting notes), that was presented to the steering committee yesterday at their public meeting. Breaking down the proposal, there are a few major points.
G: This is going to involve a lot of different change happening throughout the organization, such as changing labels to provider/<cloud provider name>, or tags to provider/<cloud provider name>. Now, all of the cloud providers certainly enjoy all the benefits of the SIGs, and these include things like time at conferences, Slack channels, and being able to use the CI; our proposal is that these don't change entirely.
G
You
know,
and
also
just
access
to
things
like
you
know,
following
the
release,
standards
and
participating
within
tested
test
grid
reporting
and
require
you
know
and
having
documentation
and
making
the
documentation
consistent,
but
there's
so
much
more
than
this.
So
if
you're
interested
in
seeing
it,
you
should
go.
Take
a
look
at
the
proposal.
You
know
feel
free
to
comment
on
it.
It
was
largely
accepted
during
the
meeting
yesterday,
so
we're
going
to
be
moving
forward
to
implement
this
with
our
plan
to
have
a
full
implementation
by
cube.
G: What else is happening in 1.15? We have a bunch of things in progress that we're going to be working on, such as improving the API server network proxy to replace the deprecated SSH tunnel system, which is required for removing the in-tree cloud providers. There's also work on an out-of-tree image credential provider...
G: ...to help keep the refactoring from breaking the dependency on the cloud provider, and we're looking at a next step of a KEP and an alpha implementation for version 1.15, as well as the ongoing task of improving cloud provider documentation within our list of tasks for this cycle. All of the owners of the different cloud providers have been assigned to these as highest priority for work to get done in 1.15, as well as migrating HA clusters to use the cloud controller manager. One of the tricky things about this involves leader election.
G: Leader election is not as efficient as it could be for HA. Walter could go into some depth about what this is, but basically, if a new leader is elected, the old leader has to kill all of its processes, because you can't know whether processes have been spawned that are going to then create race conditions. So we're looking at possible ways to improve our performance on that. Okay; as I talked about earlier, in-tree cloud providers are going away.
G: So our goal is to have all the in-tree providers have a migration path out of Kubernetes by the end of the year. So if you are producing a project right now where you have a choice between the in-tree or the external provider, start using the external provider. If your provider is there, and it's supported and it exists, making that transition should be fairly easy for you. But there are actually a lot of products out there that are maybe running clouds right now.
G: Finally, if you depend upon the oVirt, CloudStack, and Photon cloud providers, those have been deprecated and are being removed. As far as we know, there isn't anybody right now who is maintaining those external providers, and so, if you depend upon those, then please get in touch with us and get involved with implementing an external provider, so that support will stay in place for all of you.
G: So we have lots of things that you can contribute to this cycle. I don't want to list them all here, but if you go to the issues in our kubernetes/cloud-provider repository, you can see what's available to be done, particularly documentation. If you are in a SIG and you want to help document, please help with the documentation. And actually, I will list some of the things that are available right now; these are current tasks that are issues for the next cycle but haven't been assigned yet.
G: So, if you are looking to make some contributions, these would be some great places to get started, and you can come find us over on Slack at #sig-cloud-provider. The chairs of the SIG are Andrew Sy Kim, Jago Macleod, and myself, but I also want to make sure that I mention the leadership of Nishi Davidson, Walter Fender, and Mike Crute, who have all really put in a lot of work to help me move a lot of these initiatives forward and have really helped drive the SIG.
G: We have our homepage in the community repo under sig-cloud-provider, a Slack channel at #sig-cloud-provider, as well as the mailing list. If you want to join our meetings, we have bi-weekly meetings on Wednesdays at 1:00 p.m., and then, if you want to be involved in the cloud provider extraction process, we have weekly meetings scheduled for Thursdays at 1:30 p.m. Pacific time. And with that, I will open it to questions; or, if I missed anything and Andrew wants to jump in, I'm happy to hear about that also.
A: There was one question by Joe in the chat, but I think Andrew's answered that. Okay. Thank you very much, Chris, and thanks also to Walter. Fantastic. Okay, so that takes us over to our next SIG update: we have Daniel Smith (lavalamp) running SIG API Machinery. I'll hand it over to Daniel. Thank you, Daniel. Hi.
B: I should start at the beginning here. Hi, I'm Daniel Smith; I work for Google. My co-chair is David Eads at Red Hat, and it's my turn today. So, this is the SIG API Machinery update. We'll start off with what we did this cycle. The most notable thing, or at least the thing that I may be the most happy about, is server-side apply, which is in alpha now.
B: So, if you have ever used kubectl apply, give this a try; I think it's pretty great. There are some recorded demos in API Machinery SIG meetings if you want to see what it looks like. Yeah, so that's pretty cool. We've made some progress on our other goals: CRD schemas now get published into OpenAPI so that discovery can work, which is new, and we have a path to get storage migration for API objects, like when you update your Kubernetes version and versions change.
B: There's an example KEP out for adding union types. Maybe the best example in the Kubernetes API is the long list of volume types that you can put in a volume; we'd like to regularize that and make it easier for clients that are possibly on a different version of the schema to work. So go read that KEP...
B: ...if you're interested in that. CRD conversion webhooks: those are going to beta very soon, and then they will go to GA along with CRDs, which is the next line item. We'd really like CRDs to exit beta and go to GA, so there will be a KEP for this very shortly. Yeah, that's super important to us.
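As a sketch, the conversion webhook wiring on a CRD of that era looked roughly like this; the group, kind, versions, and conversion service below are hypothetical:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  names:
    kind: Widget
    plural: widgets
  scope: Namespaced
  versions:
  - name: v1alpha1
    served: true
    storage: false
  - name: v1beta1
    served: true
    storage: true
  conversion:
    strategy: Webhook            # call out to a service to convert between versions
    webhookClientConfig:
      service:
        namespace: default
        name: widget-conversion
        path: /convert
```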
B: Anything you can think of, we wanted to go to GA. A couple of other things that we're working on: security features, proxying, or allowing you the ability to proxy traffic from the API server. Right now, API server traffic just drops onto the network with no guarantee about the destination. So you don't know: is this packet intended for etcd? Is it a packet intended for a node in the cluster?
B: So we're going to add a classification system so that, in the unlikely event that somebody tricks your API server into emitting a network request that it shouldn't, that request will get classified with the path that it was intended to go to and will not be able to talk to something that it shouldn't; basically, it won't be able to sneak in a call to etcd if your traffic was really intended to go to some webhook. So there's a KEP; it's not implementable yet, but the first draft was approved and merged.
B: Somebody could ask me a question here... no? Okay. And the last thing on the list, last but not least: we're working on rate limiting. This is capping, hopefully intelligently capping, the number of requests that come in to the API server. Right now it's just a blanket rule: if we have 400 requests in flight, start rejecting them. So we're going to try and distribute that more fairly, so that a bad actor can't consume all of the available request quota.
B: All these plans affect extensibility. We're finalizing them any minute now, so, if you're interested in this, or you vitally need some feature or some extensibility thing, get in touch very soon: show up on our mailing list, come to the SIG meeting. If you want input, now is the time; don't wait. Now, the template was supposed to give a status update for every sub-project.
B: First on the list (and this list isn't in any particular order): server binaries, so the API server, controller manager, and cloud controller manager, and then supporting libraries. And again, this is not necessarily the contents; we don't own every individual API, and we don't own every individual controller, but the binaries themselves seem to be owned by the SIG.
B: We do own a few particular APIs and a few particular controllers, and those are rolled up in the control plane features sub-project: the garbage collector, the namespace lifecycle controller, the quota system, the upgrade system, the storage migrator. We own those. We also own the universal machinery, like the API machinery package; if you've ever interacted with client-go, you've probably seen this in your import list. These are aspects of our API machinery that are necessary for both client and server, and the server frameworks.
B: The API server: when we unpack it, the kube-apiserver binary actually runs three API servers. It runs the aggregator, the API server with the built-ins, and the extensions API server, which owns your CRDs; that's not as commonly known as maybe we'd like it to be, so it's interesting. Those are all built off of this API server framework. We own the extensions API server, which is the name of the thing that serves the CRDs.
B
The
aggregator,
which
is
like
the
fedora
and
permits
you
to
like
distribute
API,
is
among
different
API
server
binaries,
which
is
used
to
support
the
metrics
API
server.
Among
other
things,
we
own
the
server
well,
but
I've
called
the
server
SDK
project,
which
is
like
samples
and
tools
to
help
you
build
api's
and
API
servers.
They
have
a
separate
meeting
for
queue
builder
and
the
controller
runtime
I
can
share
these
slides
later
because
you
can't
read
this
link,
but
if
you
go
to
actually
I
can
click
on
it.
You
go
to
this
page.
B: ...so if you browse the community repo to SIG API Machinery, you can see they have links. What else? There's a whole aspect of our API system which is taking schemas and turning those into useful things like clients. So we have Gengo, which is a library that (I guess it was me that started it, way back in the early days of Kubernetes) parses Go files and helps you generate output. We have code generators built off of that that do things like the deep-copy generators and all that stuff.
B: We have the structured-merge-diff library, which is something we wrote to support the new apply; we put all the business logic there so that it will be easy to use in other contexts. The Kubernetes client sub-project has a really long list, like 10 to 15 different clients. I think our newest addition is Perl; I guess some people really want a Perl client. I signed off on it with the expectation that I will not personally have to write any Perl, so you're all safe.
B: That was a lightning tour of our sub-projects; again, perhaps I will not go through each one. But if you are interested, just look for KEPs with the label sig/api-machinery; there are these PRs open. In particular, here's the one KEP that I've mentioned, and the rate-limiting stuff also goes by the names priority and fairness, and concurrency, so you can see there are several. So, if any of those things interest you, please go look at the KEPs and comment on them, or mention them on our mailing list.
B: Working group status: apply I already mentioned; we have a roadmap for how to get apply to beta, so the team is executing on that, and yeah, it should be pretty cool. If you're interested in that, there is a separate Working Group Apply meeting every other Tuesday morning, Pacific time; join the mailing list.
B: If you are interested, how can you contribute? We're going to try an experiment, actually. My team here at Google has been running bug triage meetings for like the last year and a half, and it occurred to us: well, why not open those up? So we're going to try an experiment. At the risk of repeating myself, join our mailing list.
B: If you're interested in joining us for that: I think API machinery has somewhat difficult-to-understand issues, and we've mostly done the low-hanging fruit, so it can be difficult to interact with the project, and I think just sitting and watching us go through issues and learning by osmosis might actually not be a bad way to learn more. And lastly, where you can find us: I'm lavalamp, and you can also talk to David.
A: Yeah, thank you very much, Daniel. I wanted to thank both Daniel and Chris for providing wonderful SIG updates for us all in the community; very much appreciated. I know a lot of time goes into making that update happen, so I just want to give them a round of applause. So thank you very much. This brings us to the last part of the stated agenda, which is announcements and then shoutouts.
A: Announcements: you can go to the agenda and look these up, but we have office hours next week; they're on Wednesdays. If you're interested in getting involved in the office hours, please ping George. George, do you have any other comments about that? Nope, all set to go. All set to go; thank you very much. There is also a link to a Windows containers Kubernetes poll: the folks in SIG Windows are looking to learn more about Windows use cases.
A: So, if you are somebody that is interested in, or is already using, Windows on Kubernetes, please go over and provide some feedback to SIG Windows as to how you're using it, so that they can better assist and drive the roadmap for those use cases. Thank you very much; the link is in the agenda. The last announcement here is that Cluster API now has a category for discussions.
A: If you want to join in, there is a link; that's under Discuss, George, I assume? Okay, so you can go to Discuss and start discussing all your Cluster API related issues. So thank you, George. Okay, so this brings us to shoutouts time. For those who aren't aware, in the Kubernetes Slack there's actually a #shoutouts channel, and that shoutouts channel is about saying thank you to other people in the community for their help or tireless efforts.
A: So if somebody has helped you, feel free to go pop into that channel and give them a shout out. And I will take a moment and actually pull out some of those shoutouts from this week. So we have Valerie, who has a shout out to Andrew Sy Kim "for helping me get a kube-proxy bug fix out the door"; I know Andrew's on this call.
A: Another one, from JD: a shoutout to Catherine for helping out with the recent Boskos deployments we've needed for wiring up automated e2e tests for the Cluster API sub-projects. So thank you, Catherine, for doing that. We'll also take a moment just to take a look at all the people on Stack Overflow that are answering the plethora of questions tagged with kubernetes. So I'm going to call out just some of the top ten answerers on the kubernetes tag and thank them for their efforts there. So we have Frank Yu Changu.
A: We have Eduardo Baitello, Rico, cookiedough, Janos Lenart, P Ekambaram, Harsh Manvar, 4c74356b41 (thank you for your efforts), A_Suh, and Leandro Donizetti Soares. So thank you very much for helping out, and if you're one of those folks on Stack Overflow, thank you for your tireless effort; there are a lot of questions in there, if you're going to take a look or are using it. Hopefully I read the usernames okay and did the names justice; apologies!