From YouTube: Kubernetes Community Meeting 20180301
Description
We have PUBLIC and RECORDED weekly video meetings every Thursday at 10am US Pacific Time.
Notes: https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY
B: Let me just share my screen. Yeah, so I wanted to talk a little bit about the tool, Sonobuoy, that we've been working on at Heptio. The high-level sentence is: it's a framework for extracting cluster and workload information. To make that a little bit more concrete, I wanted to go over a few use cases.

B: You can submit a PR to the CNCF and, if all the tests pass, you can put this little badge on your Kubernetes installer so that everybody knows that your installer is certified. Another thing is: it's sometimes nice just to be able to run the conformance tests, and this is actually just an easy way to do that.

B: If you spin up a cluster, you can run your conformance tests, see what's working and what's not working, and figure out how to improve it. And security, something everybody cares about: there's a plugin that was written not by anybody at Heptio but by Aqua Security. It's a Sonobuoy plugin that runs kube-bench, which is a set of security checks for your cluster. So you may care about one of these use cases, and to show how it works, I wanted to do a quick high-level overview of what's going on under the hood.
B: There are three pieces to care about here. There's the configuration, which defines the plugins to run: you define the plugins you want to run, and you define the metadata queries to run, so you can collect information about your cluster, such as what pods are running, what interfaces exist, all sorts of different resource gathering; and you can also specify which plugins to run. Then the two actual pieces of infrastructure that get created are the aggregator, which is responsible for planning and running the worker pods and collecting the metadata from the cluster itself, and then the plugin worker, which is the thing that will actually be running the plugin container (that's going to be running, say, the end-to-end tests or the systemd log gathering container) and sending the responses back to the aggregator.
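The aggregator/worker flow described above can be sketched schematically. This is an illustrative model only, not Sonobuoy's actual implementation; all the names here are hypothetical.

```python
# Schematic model of the flow described above: an aggregator launches
# one worker per configured plugin and collects each worker's response.
# This is NOT Sonobuoy's real code; names are illustrative.

def run_plugin(plugin_name):
    """Stand-in for a worker pod running one plugin container."""
    return {"plugin": plugin_name, "status": "passed"}

def aggregate(config):
    """Run every configured plugin and collect the responses."""
    results = {}
    for plugin in config["plugins"]:
        results[plugin] = run_plugin(plugin)
    return results

config = {"plugins": ["e2e", "systemd-logs"]}
print(aggregate(config))
```

The point of the split is that the aggregator is the single collection point, while each worker only knows about its own plugin container.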
B: Once the plugin has finished running, the aggregator is also the thing that will be running the queries for your metadata: it'll query the Kubernetes API and gather all of the resources that you ask it to gather, and that's all configurable in the configuration file. Plugins are kind of interesting in how they work. We've broken plugins down into two different types: there are node-dependent plugins and node-independent plugins. For the node-dependent plugins, it matters which node your plugin is running on.
B: So we use a job driver for that. And once your plugins finish running, Sonobuoy provides you a tarball of all of the outputs: you'll get the results of all of your plugins and the results of all of your queries in a big tarball of JSON and XML, or whatever format your plugins are generating, and it's kind of unwieldy to look at, not super pretty.
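A results tarball like the one described above can be inspected programmatically. The sketch below builds a tiny stand-in archive and then lists and parses it; the file layout and contents are assumptions for illustration, not Sonobuoy's actual output structure.

```python
# Build a small stand-in results tarball, then list and read its
# contents the way you might inspect a real results archive.
# The path "plugins/e2e/results.json" is an illustrative assumption.
import io
import json
import tarfile

# Create a fake archive in memory (a real run produces a .tar.gz file).
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    payload = json.dumps({"plugin": "e2e", "status": "passed"}).encode()
    info = tarfile.TarInfo(name="plugins/e2e/results.json")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))

# Inspect it: list member names and parse any JSON results.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r:gz") as tar:
    for member in tar.getmembers():
        data = json.load(tar.extractfile(member))
        print(member.name, data["status"])
```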
So we released a tool a little while ago called Scanner, which is a web interface to Sonobuoy. And it's nice.
B: It's a nice way to get started because it gives you a predefined configuration, so you don't have to worry about configuring it yourself, and it's going to collect just enough information to populate the UI that you'll see at the end. Once it finishes running, you'll actually get to see a list of all the tests that ran; the ones that failed get bubbled up to the top, and you can see what version of Kubernetes your cluster is running, how many nodes you're running on, and other such information.
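The "failures bubbled up to the top" ordering mentioned above is just a stable sort on pass/fail status. A minimal sketch with made-up data:

```python
# Sort test results so failures bubble up to the top, as in the UI
# described above (the data here is illustrative).
results = [
    {"name": "test-a", "passed": True},
    {"name": "test-b", "passed": False},
    {"name": "test-c", "passed": True},
]

# False sorts before True, so failed tests come first; Python's sort is
# stable, so each group keeps its original relative order.
ordered = sorted(results, key=lambda r: r["passed"])
print([r["name"] for r in ordered])  # -> ['test-b', 'test-a', 'test-c']
```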
B: Unfortunately, it's not easily customizable (it is customizable, but it's not super easy to do), the data gets deleted after 90 days, which may or may not be a pro or a con, and you can't really run it from an air-gapped environment, because Kubernetes would need to talk to the Scanner service. I wanted to do a quick walkthrough of that with screenshots. So this is the main landing page for Scanner, scanner.heptio.com.
B: You click on "Scan your cluster", you get to this page, and you can copy and paste that kubectl apply URL that everybody likes, and enable or disable RBAC depending on your cluster settings. Once it finishes running, you'll see this page, which shows you the results of your conformance tests. You can see here there were 125 that got run, and they all passed.
B: If there was a failure, you would see the failures bubbled up to the top, and you can download the test report here as well, and that'll give you the files you need to pass the CNCF's Kubernetes certification program. So we've been working for a long time on getting the next release out, and I wanted to highlight a couple of the features that we'll be releasing in the next month.
B: We've made the plugin definitions easier to write, so hopefully it'll lower the barrier to entry for folks who want to write plugins, and there are just lots of bug fixes. Oh, and a Golang client as well, so you can programmatically interface with the libraries if you like. So with that, I want to move on to the demo. Let me just show my other screen.
B: OK, so I have a new cluster that I've spun up, and we've got this help here. We're just going to do a quick run, so you can say mode equals quick, and that will go and create a bunch of objects. You can see it creates the namespace that it runs in, all the RBAC stuff that it needs to run the plugins and things like that, the config map (so this is the configuration), and then the actual aggregator pod and the service to communicate.
B: If you would like to get involved, you can check out our issues board; there's a link to the slides in the community meeting notes, as well as the Sonobuoy repo. Ping us on the sonobuoy channel in the Kubernetes Slack. Please do: we would love testers for our CLI, because it's pretty new and there are definitely edge cases that we want to iron out before the release, and we love feedback. One major theme of this whole release is that we've based it on the feedback we've been hearing from folks in the community.
A: Thank you so much.

C: So we are in week 9 of 12 of the release schedule, and if you go to the meeting minutes for the community meeting, there is a link to the full schedule. This week we began code freeze. We did a 24-hour postponement due to the fact that we had a problem that kept the submit queue from working at an optimal rate, so we had a lot of things backed up. And one of the interesting things that we learned from having that 24-hour delay was that it made...
C: It made it easier to sort of get things tidied up and ready for code freeze. So I don't know if we really learned anything from that, but it's certainly something we might look at: how long we make the code-freeze transition. So I'll talk a little bit about why we do code freeze, because some parts of the release process are not necessarily totally obvious.
C: Code freeze provides an opportunity for SIGs to focus on bugs and cleanup, and because the merge queue is only handling things related to the release at that time, writing new feature work doesn't have as much value then, because you can't really merge it. So this gives SIGs a chance to clean up, do bug fixes, and other things like that. It also allows time for technical debt elimination.
C: So if you were getting 25-minute-plus unit test run times before, you should be down to five or six minutes now, which is a very big improvement; thank you to all the folks in SIG Testing for the work you're doing. The mechanism of code freeze, essentially, is that we use labels to determine what is relevant to the release itself. The status/approved-for-milestone label is the primary way we do that.
C: The keepers of that are primarily SIGs, although in some cases the release team can edit it. For example, after code freeze, I had to go through the 48-to-50-odd approved and LGTM'd PRs that were not labeled that way in order to get them to merge. So there are cases where, as release lead, I have to go do that, or other members of the team will do it. But for the most part, if you're in a SIG and you want to make sure that something is getting the attention of the release team and making it into the release itself, that status/approved-for-milestone label is the key to add there, and make sure that it's in the 1.10 milestone. And if that label is set, you also get some nagging from the bot.
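The milestone-plus-label filter described above is simple to express. The sketch below is illustrative (the PR data is made up; the label name is the one mentioned in the talk), not the release team's actual tooling:

```python
# Filter PRs down to the release-relevant set: they must be in the
# release milestone AND carry the status/approved-for-milestone label.
# Illustrative data only.
prs = [
    {"number": 101, "milestone": "v1.10",
     "labels": ["status/approved-for-milestone", "lgtm", "approved"]},
    {"number": 102, "milestone": "v1.10", "labels": ["lgtm", "approved"]},
    {"number": 103, "milestone": "v1.11",
     "labels": ["status/approved-for-milestone"]},
]

def release_relevant(pr, milestone="v1.10"):
    return (pr["milestone"] == milestone
            and "status/approved-for-milestone" in pr["labels"])

print([pr["number"] for pr in prs if release_relevant(pr)])  # -> [101]
```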
C: The bot nags about which labels you need to make sure are also assigned. So things that are in the release milestone, that are issues and are approved-for-milestone, are considered release-blocking, and that gives SIGs, like I had mentioned a couple of meetings ago, that sort of proverbial stop chain to pull on a release if there's something that you feel is going to negatively impact our user community. And that is definitely everybody's right and duty to do, if you feel like there's something that could negatively impact the release.
C: So a beta went out late last night; thank you, Caleb Miles, for doing that work. There's a link to the release in the meeting minutes as well. If you are able to, please check out that beta and do some work with it. If you run into issues, definitely file those and give us visibility, because that's one of our early signal points: the more people who get to use the beta, the better off we are.

C: There's a link in the meeting minutes to the known issues aggregator. This allows us, if you're a SIG and you know that something going out in the release has some impact, but it's in the will-not-fix category for whatever reason, or it's maybe a deprecation that has implications for existing versions, or whatever that looks like, to track that stuff in one issue and make sure that it gets into the release notes.
C: So if you know of something like that, or suspect there's something like that, go ahead and put it in there just in case, so we can keep an eye on it. Lastly, release notes and user-facing documentation should be close to complete: PRs for docs, that is, user-facing docs, need to be ready for review by tomorrow. Usually, if we have a deadline, it's at 6 p.m. Pacific; we try to give a full workday on deadline days.
C: So if it's different than that, I'll definitely let people know, but for the most part plan on 6 p.m. Pacific time to have those ready for review, and then, if we want to get a jump on it this weekend, we can start looking at them. That's pretty much it for the 1.10 release. Are there any questions?
C: Okay, I put in the next two updates for the point releases that are coming out. 1.8.9 should be out today, and 1.7.13 is out. There was a little bit of, I don't know, difficulty with the release process for 1.7.13 for some reason, and the version information that actually went into the Kubernetes release in GitHub doesn't have the full information. Folks are working on sorting that out and are on top of it, so we'll have more updates on that.
D: This is a new graph that Lukas put together, because going into code slush we were concerned with what seemed like a high number of open issues, and so I wanted to compare whether the number of open issues was actually high compared with previous releases. It turns out that we had most of the information that we needed in the graph database, with a caveat that I'll explain in just a minute, so that we actually could compare them. So if you want to click that first link, the 1.10 link.
D: It turns out that if you manually change a milestone, like you just remove a milestone from an issue, that does not generate an event, so we don't actually know, for the graph, that the issue has been taken out of the milestone until some other event happens to the issue, such as it being closed or a comment being added. We will open a bug with GitHub about this, but it does mean that the counts I'm about to show you are a little high, because they're effectively trailing reality by a few days.
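The caveat above (a manual milestone removal emits no event, so event-derived counts trail reality) can be shown with a tiny sketch. This is an illustrative model of the problem, not the actual graph database:

```python
# Reconstruct milestone membership from an event stream, as an
# event-driven graph would. A manual removal that emits no event
# leaves the issue counted until some later event arrives.
events = [
    {"issue": 1, "action": "milestoned"},
    {"issue": 2, "action": "milestoned"},
    # Issue 2's milestone was removed by hand: no event was recorded,
    # so the reconstruction still believes it is in the milestone.
]

in_milestone = set()
for ev in events:
    if ev["action"] == "milestoned":
        in_milestone.add(ev["issue"])
    elif ev["action"] == "demilestoned":
        in_milestone.discard(ev["issue"])

print(len(in_milestone))  # counts 2 issues, though reality is 1
```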
D: Okay, and so this was our last normal-cycle release, and in here you see the counts of both PRs and issues are actually much higher than they are for this release.
F: From SIG Big Data. So in general, the theme of the SIG is to work on projects which are usually external, and to bring the right combination of Kubernetes knowledge and knowledge of those external frameworks which are commonly used for big data computation in their individual ecosystems. So the three projects that we've been working on are Apache Spark, Airflow, and HDFS, and there's stuff happening on each of them.
F: Apache Spark just released 2.3 yesterday, and that contained all of the stuff that we had worked on for the past year, which was initially in a fork and has now been upstreamed. And the Apache folks have actually been helping us out with transitioning from the Kubernetes governance model in our SIG to the Apache one, which is just like: if it doesn't happen on a mailing list, it just didn't happen at all. So it's different.
F: Yeah, and in the process we added three new Apache committers from our SIG, who will also be Spark committers. In relation to that, we're working on a spark-submit operator which brings in Kubernetes semantics, kind of like the entry point to Spark itself on other platforms. We also have a group of people that are working on Airflow. Airflow is this really popular DAG scheduler that's used for composing data processing and ETL transformation pipelines.
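A DAG scheduler like the one described above, at its core, runs tasks in dependency order. A minimal topological-ordering sketch of an extract/transform/load pipeline (illustrative; this is not Airflow's API):

```python
# Minimal sketch of what a DAG scheduler does: compute a valid
# execution order from task dependencies (requires Python 3.9+).
from graphlib import TopologicalSorter

# task -> set of tasks it depends on
pipeline = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
}

order = list(TopologicalSorter(pipeline).static_order())
print(order)  # -> ['extract', 'transform', 'load']
```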
F: The effort has actually been partially upstreamed at this point: in the next Airflow release, which I think is due in March, you can actually pick it up and launch pods as part of your workflow, and run pretty much anything in those steps. We're also working on a Kubernetes executor, which is going to let Airflow run entirely on Kubernetes and not need an additional cluster manager or a Mesos cluster, and there are blog posts on this coming up. And finally, on HDFS, there's a lot of hardening of existing work.
F: That's going on with things like adding highly available NameNodes, fault recovery, and things like that. Success for this project looks like making it performant and secure, and making HDFS on containers just as good as it is on YARN and the traditional Hadoop stack. Yes, so there's a demo for that.
H: So for 1.10, SIG Storage has been working on moving a lot of features from alpha to beta. Specifically, the local storage API, which can be used to access new local storage, has been moved to beta. The CSI core API, which was introduced in about the last quarter, has also been moved to beta. In doing so, we added the ability for you to control your filesystem type and other CSI parameters, and to share secrets with CSI drivers. This version maps to CSI release 0.2, and there are more details in that link.
H: There were some features that we were hoping to get into beta that weren't able to get in; we're going to continue to work on those next quarter. Associated with CSI is mount propagation. It's a feature that allows privileged containers to set up bidirectional or shared mounts. This is important for external volume drivers that get deployed as containers, so that the mounts they create inside the container are actually propagated to the host machine. So we moved that feature to beta as well.
H: The ephemeral storage request/limit API has moved to beta. This allows you to set limits on how much space your container can use from the local machine for things like container logs, the overlayfs, the images, things like that. And then finally, we've added the ability to prevent volume objects from being deleted out of order. So, for example, if you have a bound PV and PVC, we added finalizers, so deletion of the PV object is blocked until the PVC is deleted, and similarly, the PVC won't be deleted until the pods that are referencing it are deleted.
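The out-of-order-deletion protection described above can be modeled very simply: an object is only really deleted once nothing still references it. This is a schematic sketch of the idea, not the real controller logic, and the object names are made up:

```python
# Schematic model of finalizer-style deletion ordering: an object
# stays around while any other object still references it.
def try_delete(obj, objects):
    """Delete obj only if nothing else still references it."""
    dependents = [o for o in objects if obj["name"] in o.get("uses", [])]
    if dependents:
        return False  # a "finalizer" keeps the object around
    objects.remove(obj)
    return True

pod = {"name": "pod-1", "uses": ["pvc-1"]}
pvc = {"name": "pvc-1", "uses": ["pv-1"]}
pv = {"name": "pv-1", "uses": []}
objects = [pod, pvc, pv]

print(try_delete(pvc, objects))  # False: a pod still references it
print(try_delete(pod, objects))  # True
print(try_delete(pvc, objects))  # True: now unreferenced
```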
In addition to moving a bunch of features to beta, we've been working on designing new features. The biggest feature that we're looking at right now is topology-aware volume scheduling. This will allow the scheduler to actually influence where volumes are provisioned, which it can't do today.
H: What's next for SIG Storage: before 1.10 is cut, it's going to be focused on testing, testing, testing, bug fixing, and documentation. Then in the next quarter, and the next couple of quarters for 1.11 and 1.12, we're going to start driving the beta features towards GA and stable. We want to make sure that we don't get stuck in beta indefinitely.
H: We're also planning a series of videos to help onboard new contributors within the SIG. That kind of leads to the next slide, which is how you get involved. We have bi-weekly meetings; there's a link here and a link to the notes (bi-weekly means every two weeks). There's a Slack channel you can jump in and ask for help on. We're going to be at KubeCon EU, presenting an intro to SIG Storage session there, if you're going to be at KubeCon. And finally, we hold face-to-face meetings regularly, every one or two quarters, to have intensive design discussions and close on some of the more contentious designs. We're planning our next one for somewhere around April; join the meetings to get involved with that. And that's all from SIG Storage.
I: First, I'll address the Federation code, which was moved out of tree during 1.9. As I mentioned, it has only been updated a little bit. The net effect of that was that the kubefed binary, which is used to bootstrap the Federation control plane, mysteriously disappeared from 1.9, so I caught some user confusion: essentially, the documentation on how to start a federated control plane did not work. There is ongoing work within the repo to be able to rebuild it out of core, but, to be honest, I think the next steps are somewhat unclear.
I: Where Federation is headed has moved to a separate working group called Federation v2, so I think I'd want to bring that up with maybe SIG PM as to where that leaves our users, or the community site could try to clarify where the effort is going. For example, if you go to the top-level documentation page, it would be nice to at least have a note that says what the status of the project is, just to be fair to our users here.
I: So, essentially, what that means is that before, in Federation v1, per-cluster-specific overrides on types were buried in annotations, and right now the effort is to try to formalize that through templates. The two other projects that are worth mentioning for the update: one is called cluster registry. It's in a standalone repository, and right now it maintains lists of kube API servers, which, for the community, are clusters.
I: But I guess I should mention that we've had other projects like Istio reach out and consume this API, to be able to use the cluster registry to do Istio multi control planes; it's actually their version of multi-cluster for Istio. That's why I'm saying it maintains lists of kube API servers: because Istio uses kube API servers and not necessarily Kubernetes clusters. The goal for that project is to get to agreement on the API, to be able to go to beta this quarter.
I: The last ongoing project within the SIG is called MCI. MCI stands for multi-cluster ingress, and it has two sub-projects. One is a set of command-line-based tools to be able to configure multi-cluster ingress, so these would do ingress management across multiple clusters; and the idea there is to hook it up to the cluster registry, the other project, to be able to do label-based cluster selection, to be able to deploy ingresses across multiple clusters.
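Label-based cluster selection over a registry, as described above, is a straightforward filter. The registry entries, endpoints, and label keys below are all made up for illustration; this is not the cluster registry's actual API:

```python
# Select clusters from a registry by label, as in the multi-cluster
# ingress idea above. Illustrative data and names only.
registry = [
    {"name": "us-east", "endpoint": "https://1.2.3.4",
     "labels": {"env": "prod"}},
    {"name": "us-west", "endpoint": "https://5.6.7.8",
     "labels": {"env": "prod"}},
    {"name": "dev", "endpoint": "https://9.9.9.9",
     "labels": {"env": "dev"}},
]

def select(registry, selector):
    """Return clusters whose labels match every key/value in selector."""
    return [c["name"] for c in registry
            if all(c["labels"].get(k) == v for k, v in selector.items())]

print(select(registry, {"env": "prod"}))  # -> ['us-east', 'us-west']
```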
H: Yeah, I'll take that. So the background is that we have this maintainer role on the contributor ladder, and it's always been hard to figure out if you need it and why it exists. It primarily existed because there were a couple of things that you couldn't do with GitHub comments and that you needed write access to the repos for. Through automation, we've closed those gaps, and so direct write access is not required for most day-to-day maintenance. In addition, the project has grown in scope.
H: So the proposal here is to take those responsibilities that currently sit in this maintainer rung of the ladder and to fold them into OWNERS, which are scoped to specific areas of the project. So if you don't have write access and you weren't planning on getting it, this doesn't affect you at all. It's actually a win for you, because now you can probably do things that were harder to do before, and you have a more direct path to ownership of some areas of the project.
A: Cool. It's always good to be able to do more stuff via the bot commands and not need to go bug someone who has higher-level access. Also a reminder to people: if you want to see what the bot commands are and how to use them, there's a nice web page on the Kubernetes website; if you go to prow.k8s.io and click on the menu, there's the command list. All right, next up, we have some reminders about the Kubernetes contributor summit and KubeCon.
E: Sure, a reminder that the contributor summit is happening the day before KubeCon Copenhagen, which is May 1st. The registration for that is now open; there is a link in the doc. The registration this year is actually on the KubeCon website, as a co-located event. It is a Google Form, and it is free; you do not have to attend KubeCon to go. We will have multiple tracks there this year: one track aimed at new contributors, the other track aimed at current contributors. And then, oh...
E: There is a KubeCon price increase as of March 9th, and the ticket link is inside of there. I'll also grab the next one too, Solly, if that's okay. We are having our next Meet Our Contributors session; we had our first one last month, and we'll do it again every first Wednesday of every month. We'd like to do more, but that's what we're starting with right now.
E: This is an ask-us-anything session, very similar to our office hours, but with contributors on the line. We do have two times, to make it friendly to different time zones: 3:30 p.m. and 9:00 p.m. UTC. Check that against your time zone conversion, ask questions in the meet-our-contributors Slack channel, and we would also like to open it up to live peer code reviews.
E: So if you have a code review that you would like to have reviewed by a peer, please get that in at least 24 hours before, so we can make sure that we have someone who is keen on that area and will help you with it. We'll also probably do a live docs review as well, so you can see how the docs team goes through a tech review.
A: Awesome, all right. And finally, we've got some shout-outs. This section, for people who haven't heard it yet, is where we give kudos to people who are particularly helpful in the project. You can nominate people in the shout-outs channel on Slack. There were no nominations this week, but I am going to go over the top 5 responders in the kubernetes Stack Overflow tag, because seriously, you guys are awesome: helping answer people's questions about how to work with Kubernetes is very important. So we have Radek, arrow kite, bikram, Jonah, and Hyun Jin ho.