From YouTube: Kubernetes Community Meeting 20190822
Description
The Kubernetes community meeting is intended to provide a holistic overview of community activities, critical release information, and governance updates. It also provides a forum for discussion of project-level concerns that might need a wider audience than a single special interest group (SIG).
See this page for more information! https://github.com/kubernetes/community/blob/master/events/community-meeting.md
A: We have a short but high-quality agenda for you today, but before we get started, a few of the typical announcements. First of all, this meeting is recorded, so keep that in mind when you are asking questions: don't put anything on there that you don't want to share with the YouTube viewing public. Second, like all Kubernetes meetings, this meeting is subject to the Kubernetes code of conduct.
B: Hey everyone, I'm Jared Dillon, and with me is Denis. We work together at Mesosphere, now known as D2iQ. Next, please. Today I want to talk a little bit about KUDO, the Kubernetes Universal Declarative Operator. KUDO is a toolkit for writing operators, and really what we're looking to do is focus on the orchestration of really complicated stateful services. These slides are posted on the community meeting agenda.
B: So if we look at the operator developer landscape today, there are some challenges with writing operators, and a lot of those are around the fact that you have to maintain a code base separate from the code base of the thing you're operating. You have to write a whole operator, and there are great toolkits to do that, but the toolkits may not be in the core competency domain of whoever's writing these. So you have to bring up a team of Go developers to work with Kubebuilder or Operator SDK to re-implement a lot of the ops tools that the maintainers of MySQL and others have already built, and there's a high maintenance burden to that: you have a whole separate release process, etc. Next slide, please. For the users, running stateful workloads is still pretty complicated, and there's not a good, well-trodden path for this yet. There are different workflows, different APIs, there's no consistent set of CRDs, and you end up with a lot of controller sprawl as you go about this process. Next slide, please.
B: So we set out to start fixing this with KUDO, and we started with an abstraction over lifecycle operations. This is not software where you can just throw a bunch of manifests at Kubernetes and wait for it to fail to come up; you need sequencing. You need much more interesting lifecycle hooks around that. KUDO is a polymorphic operator (I'll talk about that a little bit more in a moment), but really what we're focused on is: okay, I have my components, I have Cassandra running, what's next? And so we provide a whole bunch of tools around writing custom plans to handle things like backups, restores, adding a database, stuff like that. Anything that should be representable by your existing tooling should be doable via KUDO. Next slide, please. And for the actual users using it, we provide a kubectl plugin. That then goes and translates all these CRDs, all this work you've done, into both imperative and declarative shells, so users can really easily have railroaded workflows for that application. We also handle some of the suffering from the leaky abstraction there, which is natural, but really we want people to deploy these and be able to monitor and work with them long-term. Next slide, please.
B: Now, what we're doing in the upcoming version is actually taking all of this imperative work that you do when you're declaring an operator and converting it into CRDs, and we want to do this not only for the components of your application but, like I said, for all the operations. So you can imagine a world where everything you can do with MySQL is represented as a CRD. In this example I have Kafka up here: a Kafka topic could be a CRD with a set of plans that then goes and creates that topic. There are a lot of benefits here. One is that, as an ISV, I am shipping my software with its own runbook. Not only that, I can start taking advantage of all the Kubernetes primitives along the way, so that my Kafka topic, not only my cluster but its topology, is subject to all the rules of Kubernetes. That includes things like backing up all of my CRDs, things like GitOps, and it includes hooking into Kubernetes RBAC: now I can start to say who can create a Kafka topic and who can't, using the tooling that I already have as an operator. And in doing that, we want to make sure that leaky abstractions can still be imperative. For example, in talking about this, restores are still really hard to do declaratively; they're still an imperative thing, so we can back off on that in certain areas and let people decide where. Next slide, please.
B: So from the beginning we're an open governance model. We're based around a lightweight form of KEPs; we call them KUDO Enhancement Proposals, just because we don't have SIGs and whatnot, but it's largely the same process, and really we want to bring contributors in. We have a bunch of reference operators, with some community operators in progress. We release often, we have a good community going with some contributors and multiple people in the ecosystem interested in this, and we're working on donating it to the CNCF as a sandbox project.
B
We
were
actually
going
to
present
that
on
Tuesday
it
got
moved
on,
but
we're
going
to
be
presenting
that
next
month
and
so
that
PR
is
open
to
move
it
into
the
CNCs.
That's
perfect
yeah,
and
so
some
quick
comparisons.
There's
a
lot
of
ways
to
develop.
Operators
go
back
one
perfect
versus
operator,
SDK
and
queue
builder,
and
by
the
way
we
love
all
these
frameworks.
But
we,
these
slides,
are
intended
to
show
where
Kudo
is
optimized
for
and
there's
plenty
of
reasons
not
to
use
Kudo
versus
one
of
these
other
frameworks.
B: So, compared to one of these traditional frameworks like Operator SDK or Kubebuilder, KUDO is polymorphic: there's one set of KUDO controllers for multiple types of operators, and you configure it via CRDs and webhooks, not by writing Go. Our test harness, everything, uses Kubernetes primitives instead of you writing software. We believe we're handling 90% of that, whereas Kubebuilder or Operator SDK handles 60 to 70%, leaving the rest for you to write from scratch. So we're an abstraction level on top, and in fact we're built on top of Kubebuilder.
Next slide. Another natural comparison is Metacontroller. For those who haven't seen it, it's a project by Google, and it's another polymorphic controller. KUDO is a lot more concerned with the full orchestration of CRD lifecycles, so we're doing a lot: our own schema loading, and we're actually managing CRDs for you in the sense of reference counting (that's a TBD feature). And we're looking a lot more at, like I said, day-2 operations and components of applications, not just how I run my software. And then versus Helm, another natural comparison: again, it's about the day-2 operations. It's about drift detection, it's about repair. You can actually use Helm charts in the next version to base KUDO operators on, to take a Helm chart and add lifecycle hooks to it, so that's kind of a nice additional feature. We don't see ourselves in contention with something like Helm or any of the operators, but rather as something you can layer other operations on top of. Next slide. How much time do I have? (I'll give you two minutes.) Okay, so the rest of these slides are up here.
C: So, thank you. Hi everyone. I have this 1.15 cluster running with nothing deployed on it, and like Jared said, one of the nice things with KUDO is that you get a single controller that you use for all the different instances of all the different services you want to deploy with it. So basically you just deploy this single controller. It's very easy: it takes maybe one minute or two, or even less, like 30 seconds, and you get your controller up and running in your cluster, and that would be it.
C: You would not need any specific controller for Kafka or for ZooKeeper or for anything else. So I just wait for it. You see, it's already there: the KUDO controller is up and ready, and it just took like 30 seconds. Then, if you want to deploy a ZooKeeper, it's as simple as doing kubectl kudo install zookeeper with the name of the instance you want to use, and like Jared says, we have this kind of plan that you can use to follow the execution of the deployment of your cluster. Let me run it again.
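The flow being demonstrated can be sketched with the KUDO kubectl plugin. This is a reconstruction from the talk, not a verbatim capture of the demo: the instance name is illustrative and flag spellings have varied between KUDO releases, so check `kubectl kudo --help` for your version.

```shell
# Install the KUDO controller into the cluster (one controller serves every operator)
kubectl kudo init

# Deploy a ZooKeeper instance; --instance picks the name used in later commands
kubectl kudo install zookeeper --instance zk

# Follow the deploy plan as it executes
kubectl kudo plan status --instance zk
```

These commands assume a running cluster with the KUDO plugin on the PATH.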
C: Sometimes I have a bad network connection at home, but it's fine. Now that it's deployed, I see I have this deployment plan that is in progress, and I can also take a look at the pods. I should be able to start to see my ZooKeeper pods being created; they should be there in a second, and then you'll be able to see that the plan shows everything as completed.
C: So we just deployed this ZooKeeper because it will be used by the Kafka cluster. Just going through that again: it needs some time to get these containers running and then the ZooKeeper will be ready, and the next step is to go and deploy an instance of the Kafka service. Again, very easily: you just do kubectl kudo install kafka with the specific version, and that will use the ZooKeeper we just deployed. It should be ready there. Yeah, I can now deploy my Kafka cluster.
C: This is the version of the operator that contains Kafka 2.2.1, and then we have another version of it that deploys Kafka 2.3.0, and actually, a bit later, we can do an upgrade from one version to the other. But again, as soon as you start to deploy this Kafka instance, you can use kudo plan status to see the progress, and you can also see the pods being created by the StatefulSets.
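A sketch of the Kafka step described above. The instance name and the version pin are illustrative assumptions, and the exact flag for selecting an operator version has changed across KUDO releases:

```shell
# Install a Kafka instance against a specific operator version (illustrative pin)
kubectl kudo install kafka --instance kafka --operator-version 0.1.2

# Watch the deploy plan, then the pods the operator's StatefulSet creates
kubectl kudo plan status --instance kafka
kubectl get pods
```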
C: So, just waiting for these pods to be ready. You see, the first one is already there and this one is being created. What I'm going to do is start this producer and this consumer, look at the logs to see that the messages are coming through, and then show you a few other things. These are the CRDs that are currently being created, and in a future version we will use dynamic CRDs here.
C: Then you would get CRDs, like Jared said, for topics and for many other things you want to be present. If you want to see what has been deployed by KUDO, it's as easy as doing kubectl get instances, and you just get the ZooKeeper and Kafka instances displayed. In the same way, if you want more details, such as the version of the Kafka operator being deployed, you just use the standard kubectl commands with this CRD.
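The inspection side of the demo uses only standard kubectl verbs against KUDO's own CRDs. The resource kinds match KUDO's CRDs, but the specific object name below is an illustrative assumption:

```shell
# List every KUDO-managed instance in the current namespace
kubectl get instances

# Drill into the operator version backing the Kafka instance
kubectl get operatorversions
kubectl describe operatorversion kafka-0.1.2
```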
C: You'll be able to see that it is version 0.1.2 that is being deployed here, and again, just taking a look at the pods, they should now all be running. So I can start my producer here and I can consume the messages, just to double-check that everything is working well, and I should be able to check the logs. I just probably need to wait for this container to be created; if you check too quickly, it will still be creating rather than running.
C
And
you
see,
I
can
start
to
see
the
messages.
What
I'm
going
to
do
now
and
I
know
that
we
are
quite
late,
I'm
going
to
do
an
upgrade
from
one
version
to
another,
just
to
show
you
how
easy
it
is
to
get
from
one
version
to
another
and
then
after
that,
I'm
going
to
patch
the
current
set
up
so
that
I
go
from
three
workers
to
five
workers,
but
because
we
are
quite
late.
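The two day-2 actions mentioned, upgrading and scaling, could be sketched as follows. The version numbers and the BROKER_COUNT parameter name are assumptions for illustration, not confirmed by the talk:

```shell
# Upgrade the running instance to a newer operator version (versions illustrative)
kubectl kudo upgrade kafka --instance kafka --operator-version 0.2.0

# Scale by patching an instance parameter; KUDO reacts by running the matching plan
# (BROKER_COUNT is a hypothetical parameter name)
kubectl patch instance kafka --type merge \
  -p '{"spec":{"parameters":{"BROKER_COUNT":"5"}}}'
```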
B: That sounds good, and I'll just provide one piece of color and then turn to questions. The upgrade, and some of these others, were custom plans. For something like a ZooKeeper or etcd, you need to actually run API actions in order to add members, so we handle the case where it's not as simple as just increasing your number of replicas.
A
We're
actually
kind
of
over
time
at
this
play.
Okay,
sorry
about
that.
Thank
you
to
the
time
yeah.
So
we're
going
to
cut
that
off.
If
people
have
additional
questions,
so
one
question
has
already
been
answered
in
chat.
If
people
have
additional
questions,
please
post
them
in
the
route
chat
or
the
contact
information
for
Jared
and
Denise
is
in
the
notes.
D: Josh here, coming to you live from Open Source Summit North America Wi-Fi, so if I cut out, everything is in the agenda. I'm in the expo hall, having a wonderful time here; lovely to see you all. Updates from the release manager: beta 1 of 1.16 was released on Tuesday, August 20th, so if you're interested in seeing the latest features coming out later in September in the 1.16 release, now is your chance to get your hands on them.
Upcoming milestones for the 1.16 release: placeholder PRs for enhancements are due in by tomorrow. The work does not need to be completed, but you need to have the placeholder PR in place, otherwise you will start to hit a wall getting your enhancement in, so please make sure that happens. The other call to action is the end of next week, Friday, August 29th: we have code freeze, so if you have an enhancement, all code needs to be merged into kubernetes/kubernetes by next Friday. If that does not happen, you need to go through an exception process.
D: So please work towards that goal; it would be very much appreciated by the 1.16 release team. Patch releases: we had a patch release earlier this week, on Monday, of all the supported releases, so 1.15.3, 1.14.6, and 1.13.10, which fix two CVEs related to different types of floods against the HTTP/2 interface. We recommend you get them patched; details about those CVEs, specifically the ping and reset floods, can be found in the announcement.
D: All updates and patches are announced on the kubernetes-dev mailing list, so if you'd like to receive these updates, please go ahead and join that Google group and you will get that email, so you will be the first to know. That is the update from the release lead. I can field questions in the chat; I'll be around for another five minutes or so. (Thanks, Josh, and I'll hand it back to you. Cool.)
C: Okay, right now you should see the slide deck. (Okay, great.) So, thanks for inviting me. I'm one of the chairs of SIG Service Catalog, and I want to present the updates from our SIG. Basically, we can start with the changes that we made that could affect you. First of all, we moved our repository from kubernetes-incubator into kubernetes-sigs, so be aware that you need to update the repository location in your configuration. We also changed our import paths, so without moving your imports to the correct path it will not work. The second thing: at the previous meeting it was already mentioned that there would be some changes in the leadership, and that's true. Right now I can officially state that the previous chair stepped down, I became a new chair, and we also have a new approver, so welcome on board. The next thing is that we enabled end-to-end testing.
C
Previously
tests
were
executed
on
Jenkins
when
Jenkins
was
removed,
we
lose
them,
and
why
are
talking
about
it
because
for
enabling
that
we
use
the
kind
project
as
far
as
I
know,
is
developed
under
C
testing,
and
it's
really
fast
and
easy
way
to
execute
into
n
tests,
prove
broccoli
and
on
CI
loss
of
its
nicely
to
proc
operation,
so
kudos
to
all
developers
and
contributors
from
the
project
and-
and
that
was
basically
about
changes
that
can
affect
you.
But
of
course
there
was
much
much
more
work
done,
but
there
are
indentation
details.
C
So
if
you
are
interested,
then
just
join
our
seek
meeting
what
so
that
was
about
changes
that
we
did
and
what
we
are
doing
right
now.
Basically,
we
are
doing
a
lot
of
base
ground
for
moving
aggregating,
API
server,
juicier
the
approach
and
believe
me
is
really
huge
change
for
us.
We
already
have
a
one
per
request
on
the
review
we
already
updated
documentation.
Also
under
the
kubernetes
is
ID
and
under
our
side
we
have
new
release
process.
C
Thanks
to
that,
we
can
release
the
CLD
from
master
and
also
provide
a
bug
fixing
for
all
implementation.
Special
things
to
other
of
our
who
work
on
that,
and
we
also
have
already
emigration
tool
and
the
documentation
are
in
place.
So
don't
worry,
don't
you
worry.
The
immigration
process
should
be
really
smooth
for
you
and
it's
done
already
automatically.
C
If
you
use
how
is
also
diagram
showing
you
what
is
doing
under
the
hood
and
just
contact
us
if
you
have
any
problems
with
it
or
any
questions,
and
also
we
are
planning
to
add
additional
pipeline
for
executing
a
brief
test.
So
thanks
to
that,
we'll
know
that
our
migration
process
is
working
correctly
and
how
those
plans
affect
you.
Basically,
the
old
implementation
with
API
server
right
now
will
be
available
only
on
the
Decatur
tv2
branch,
and
we
provide
bug
fixing
support
for
that
for
next
month.
C
So
one
thing
is
that
in
the
next
month
in
the
next
Monday,
we
are
planning
to
release
the
last
version
of
the
API
server,
which
will
contains
the
features.
After
that
we
provide
only
patches
for
bug,
fixing
and
to
be
able
to
support
to
version.
In
the
same
time,
we
needed
to
adjust
a
little
bit
our
art
names
artifacts.
So
basically,
master
branch
will
still
have
canary
and
latest,
and
those
artifacts
will
be
connected
with
new
version
with
Shirdi.
Basically,
we
purchase
0.3
current
milestone
for
us
and
about
the
old
implementation
with
API
server.
C
It
will
be
released
from
0
to
branch
and
we
will
have
additional
suffix
there.
So
if
you
want,
if
you
want
to
use
images
from
all
version,
then
do
not
forget
to
have
those
additional
subjects
say
thing:
with
the
ham
charts,
the
new
version
still
sits
under
the
catalog
name,
the
old
one
will
be
with
additional
suffix,
and
also
it
applies
to
SB
cat
binaries.
To
show
that
you
is
that,
for
example,
okay,
I
see
that
is
a
miss
Christian,
so
basically
that
guy
would
be
to
should
be
placed
here
and
basically
the
master
branch.
C
So
the
newest
version
will
be
installed
in
a
normal
way
and
install
service
catalog
and
upgrade
to
be
done
in
the
same
way
with
old-fashioned
just
append
that
prefix.
It
is
quite
easy,
but
you
need
to
remember
about
it
next
plans
for
our
current
milestones,
so
the
condition
announcement
we
want
to
be
compliant
with
offshore
guidelines.
You
want
to
clean
up
a
little
bit
our
structure
and
and
so
on,
a
pipeline
fin
UPS.
Basically,
we
already
did
a
lot
of
things
because
we
inhibit
end-to-end
testing.
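The split naming described above might look like the following. The chart repository alias and the exact suffix spelling are assumptions for illustration only; check the SIG's release notes for the real names:

```shell
# Newest (CRD-based) release: installed under the usual chart name
helm install catalog svc-cat/catalog --namespace catalog

# Old (API-server-based) v0.2 release: chart and image names carry a suffix
# ("-apiserver" here is a hypothetical suffix)
helm install catalog svc-cat/catalog-apiserver --namespace catalog
```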
C: What's more, the newest OSB API version was released some time ago, and we want to be compliant with it in the current milestone. The next thing, which is quite nice, is user-provided services: currently we support them through a dedicated broker, but in the current milestone we want to implement that functionality directly in the service catalog, so using an additional broker will not be necessary. And the last thing, almost the last thing but also important for us, is to migrate the Service Catalog web resources under SIG control.
What
that
mean?
C
Basically,
we
have
our
page
under
SD
cat
io
and
we
are
not
owning
directly
that
DNS.
So
we
want
to
immigrate
that
to
something
more
connected
with
the
six
will
be
probably
with
under
new
domain
and
same
thing
for
downloading
our
as
we
cut
binaries.
So
you
need
to
figure
out
how
to
do
it
properly,
but
it
is
the
case
that
we
want
to
do
last
thing,
but
quite
week
is
to
decide
what
you
want
to
do
with
the
present
functionality.
C
It
wasn't
started
in
the
seek
sales
catalog,
but
yeah.
It's
need
to
contact,
seek
apps,
probably
and
also
seek
architecture
about
that
vision.
It
will
be
removed
or
somehow
supported
by
the
Trinities
itself.
We
also
have
a
new
sub
project
called
mini
broker.
Me
broker
basically
implements
the
USB
API,
and
it's
really
good
for
development
and
testing
local
development.
We
are
ready
using
that
in
our
work
for
documentation
previously
was
created
by
one
of
our
C
contributor
currents,
but
she
decided
to
donate
that
Burn
Notice.
So
thanks
currents
for
doing
that.
C
Second
sub
project
that
will
own
still
in
progress
is
the
broker
client
written
in
go
and
we
are
using
that
directly
sells
catwalk
to
communicating
the
brokers.
So
we
need
to
move
it
directly
on
out
under
kubernetes
6,
to
be
able
to
provide
additional
functionalities
and
develop
that
code.
It
was
started
by
pomorie
and
right
now
is
I
think
that
in
a
few
days
it
should
be
already
under
kubernetes
6
organization
and
one
more
time,
special
thanks
to
Nikita,
because
she's
running
the
whole
repository
migration
for
us
we
can
contribute.
C: You can contribute, of course: we have good-first-issues, and there are a lot of them, but you can basically assign yourself to any other issues from the current milestone; just contact us on Slack or at our SIG meeting. And about our SIG: one thing is that we changed the meeting time from 1:00 p.m. to 9:00 a.m. Pacific time, so if that doesn't work for you, just contact us and we can probably work out a different time. And the last slide, about the current chairs: me and Jonathan, plus our homepage, Slack channel, and Google group.
E: Yes, so, everybody, in case you haven't seen the post to the kubernetes-dev list: it is the election. The TL;DR is that it starts, as usual, around August and will conclude in October. All the instructions are listed in the post on the list, and updates will come from either myself, Mr. Bobby Tables, or Brian Grant. Since we just announced the election, we plan to give a quick status update both in this meeting every week and weekly on the kubernetes-dev list itself.
E
If
case
you
are
confused,
the
kubernetes
dev
mailing
list
will
be
the
source
of
truth
for
all
the
information
concerning
the
election,
we're
currently
in
the
nomination
period.
So
just
a
quick
recap
of
those
rules,
you
can
nominate
yourself
and
you
need
two
plus
ones
from
people
who
are
different
affiliations
from
you
and
then
you
will
follow
the
instructions
and
basically
PR
a
little
a
little
markdown
file
into
the
election
directory.
A: Thank you. Okay, so with that, we will wind up with some shout-outs. Aaron gives a shout-out to Tim Pepper for taking the infra meeting, and Tim Pepper gives a shout-out to Christoph Blecker, Ben, and (I don't know who Nick is, but anyway) three members of the release team for their work: the release engineering team has been focused over the past five days on getting the release going.