From YouTube: Kubernetes Community Meeting 20180517
Description
See this page for more information! https://github.com/kubernetes/community/blob/master/events/community-meeting.md
All right, welcome to the May 17, 2018 edition of our weekly Kubernetes community gathering. In these meetings we have updates from SIGs and other awesomeness that ensues in the form of release notes and updates, as well as shout-outs. I'm Paris, I work at Google. First things first: we typically do a demo, however I think they were unable to make it today, unfortunately. Last call, Raphael or Vasu, are you on the line? All right, they are not on the line.
So a whole bunch of people came together to give us a relatively clear test signal, as in only a couple of blockers; the scalability upgrade test is still failing, and those are for known reasons that just take a little bit longer to fix, and there are very few open bugs. So we feel confident in both releasing the beta and, more importantly for people working on 1.11 features, delaying the start of code slush and code freeze, which effectively means adding one more week of development and cutting a week from the code freeze period.
Hopefully this can become a standard in future release cycles. If we can keep our tests generally passing, then code freeze can in fact be very short, maybe even down to a week. We will see. So, sorry, code slush will start on May 29th and then code freeze will start on June 5th. Again, the calendar has been updated and everything is up there.
So thanks a lot, keep working on 1.11 and keep those tests passing. One area I do want to call out, again like always: if we have anybody in here who is a performance or performance-testing geek, SIG Scalability could really, really use help in improving the performance tests, so that we have faster performance feedback to contributors and thus can fix performance bugs faster and with less effort.
Okay, that's good, because I can now explain some of the features in a little bit more detail. So for 1.11, essentially we have several features that we would like to move to beta. Priority and preemption is probably the most notable one. As most people know, this allows us to specify the priority of pods, and when a cluster is under resource pressure, the pods with the highest priority can be scheduled at the cost of removing some of the lower-priority pods from the cluster.
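To make that concrete, here is a minimal sketch of how a priority class and a pod that uses it can be created with a recent client-go. The class name, priority value, namespace, and kubeconfig path are illustrative, and at the time of this meeting the API group was still scheduling.k8s.io/v1beta1 rather than the GA scheduling.k8s.io/v1 used below.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// A PriorityClass gives a name to a numeric priority value.
	pc := &schedulingv1.PriorityClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "high-priority"},
		Value:       1000000, // higher values are scheduled (and preempt) first under pressure
		Description: "latency-critical workloads",
	}
	if _, err := client.SchedulingV1().PriorityClasses().Create(ctx, pc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// A pod opts in by naming the class; the scheduler may preempt (evict)
	// lower-priority pods to make room for it when the cluster is full.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "critical-app"},
		Spec: corev1.PodSpec{
			PriorityClassName: "high-priority",
			Containers:        []corev1.Container{{Name: "app", Image: "nginx"}},
		},
	}
	created, err := client.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod", created.Name)
}
```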
One of the biggest blockers for this feature in the past has been the fact that we wanted to get some actual mileage from it. We actually asked a lot of people in the community, as well as some of our users on GKE, to try this feature. Some people tried, but we were not able to get much useful feedback, or we didn't have proper channels to communicate with them beforehand.
So there is no point in re-evaluating the same predicate again and again if there are many equivalent pods waiting to be scheduled; that's the idea behind the equivalence class. Basically, the scheduler caches the result once it evaluates the predicate, and keeps the cache as long as the condition of the cluster, from the point of view of the scheduler, remains the same. Once the conditions change, the scheduler invalidates the equivalence cache.
There are some subtleties involved, both in terms of race conditions in invalidating the cache, as well as invalidating the cache at the right moment, and also in finding what would be the right metric to determine equivalency. So we're trying to hash out all the little corner cases, and hopefully we can move it to beta in 1.11, but we are not a hundred percent sure yet.
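As a rough illustration of the idea (a toy sketch, not the actual scheduler code, with all names made up), an equivalence cache boils down to memoizing predicate results per node and per equivalence class, and throwing those entries away whenever the node's state changes:

```go
package main

import (
	"fmt"
	"sync"
)

// predicateKey identifies one cached predicate result: which predicate was
// evaluated, on which node, and for which equivalence class of pods.
type predicateKey struct {
	predicate string
	node      string
	equivHash uint64 // hash over the pod fields the predicate actually looks at
}

// equivCache memoizes predicate results so that many equivalent pods (for
// example, replicas of one ReplicaSet) do not re-run the same checks per node.
type equivCache struct {
	mu      sync.RWMutex
	results map[predicateKey]bool
}

func newEquivCache() *equivCache {
	return &equivCache{results: make(map[predicateKey]bool)}
}

func (c *equivCache) lookup(k predicateKey) (fit, ok bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	fit, ok = c.results[k]
	return fit, ok
}

func (c *equivCache) store(k predicateKey, fit bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.results[k] = fit
}

// invalidateNode drops every cached result for a node whose state changed
// (a new pod was bound, resources or taints were updated, ...). Doing this at
// the right moment, without racing in-flight scheduling cycles, is exactly the
// subtlety described above.
func (c *equivCache) invalidateNode(node string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	for k := range c.results {
		if k.node == node {
			delete(c.results, k)
		}
	}
}

func main() {
	c := newEquivCache()
	key := predicateKey{predicate: "PodFitsResources", node: "node-1", equivHash: 0xabc}
	c.store(key, true)
	if fit, ok := c.lookup(key); ok {
		fmt.Println("cached predicate result:", fit)
	}
	c.invalidateNode("node-1") // e.g. another pod was just bound to node-1
}
```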
This is another item. Anyway, there is a big effort in the community, as well as inside Google, to design gang scheduling. Gang scheduling is a feature required by various workloads, notably machine learning as well as similar batch workloads. The idea here is that we would like to schedule a number of pods together: if a certain number of pods cannot all be scheduled at the same time, there is no point in scheduling a smaller set of them. Currently the scheduler schedules one pod at a time; we would like to make some changes so that we can guarantee that either the whole gang can be scheduled or we don't schedule it at all. This is something still in the design phase, although there is an implementation of a prototype in one of our incubator projects, kube-arbitrator.
That implements a kind of simple version of this, which basically specifies only a minimum number of pods that should be scheduled together, and it works reasonably well so far. What we would like to do is basically collect all the requirements. For example, one of the things that has recently come up is the heterogeneity of the gang: some of the pods in the gang may have different images or quite different scheduling properties.
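The minimum-member idea can be sketched as follows; this is a hypothetical illustration of the all-or-nothing check, not kube-arbitrator's actual API or types:

```go
package main

import "fmt"

// gangSpec is a toy stand-in for a PodGroup-style resource: a named gang of
// pods that should only be dispatched if at least MinMember of them fit.
type gangSpec struct {
	Name      string
	MinMember int
	Pods      []string
}

// scheduleGang returns the pods to bind, or an error if fewer than MinMember
// of them currently fit anywhere; fits would come from running the normal
// per-pod predicates against the cluster.
func scheduleGang(g gangSpec, fits map[string]bool) ([]string, error) {
	var placeable []string
	for _, p := range g.Pods {
		if fits[p] {
			placeable = append(placeable, p)
		}
	}
	// All-or-nothing up to MinMember: if not enough members fit, bind none of
	// them, so a partial gang never sits around holding resources idle.
	if len(placeable) < g.MinMember {
		return nil, fmt.Errorf("gang %s: only %d of the required %d pods fit, scheduling none",
			g.Name, len(placeable), g.MinMember)
	}
	return placeable, nil
}

func main() {
	g := gangSpec{Name: "training-job", MinMember: 3,
		Pods: []string{"worker-0", "worker-1", "worker-2", "worker-3"}}
	placed, err := scheduleGang(g, map[string]bool{
		"worker-0": true, "worker-1": true, "worker-2": true, "worker-3": false,
	})
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("binding pods:", placed)
}
```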
Yeah, there has been talk off and on about using some kind of, well, I know gang scheduling has a very specific meaning in HPC, which is what you've just described, but there's been some discussion about using a gang-scheduler-ish sort of approach in order to improve scheduler throughput. In other words, just doing lots of batches of scheduling together. Are these related, as in, might one be a step in that direction?
We are not really doing much in terms of batching, although it is also basically planned to implement that as well at some point, but it's definitely not the first step for gang scheduling. However, we have another idea for improving the Kubernetes scheduler by looking at a number of pods rather than one pod at a time. That's basically the same idea behind the Firmament scheduler, as a performance-optimization area, right? Yes.
That effort is going on; basically engineers from Huawei are working on this. They have run into some technical challenges to support all of the Kubernetes API, notably pod anti-affinity, which is symmetric and has caused quite a bit of headache for them to implement properly when you are considering a batch.
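For context, the kind of rule that makes this hard looks like the following pod spec fragment, built from the standard core/v1 types (the app=web label and topology key here are just an example):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// antiAffinityPodSpec returns a pod spec that refuses to land on a node that
// already runs a pod labelled app=web. The rule is symmetric in effect, which
// is part of what makes it expensive to honour when placing a whole batch of
// pods at once instead of one pod at a time.
func antiAffinityPodSpec() corev1.PodSpec {
	return corev1.PodSpec{
		Containers: []corev1.Container{{Name: "web", Image: "nginx"}},
		Affinity: &corev1.Affinity{
			PodAntiAffinity: &corev1.PodAntiAffinity{
				RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
					LabelSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"app": "web"},
					},
					TopologyKey: "kubernetes.io/hostname", // i.e. at most one such pod per node
				}},
			},
		},
	}
}

func main() {
	spec := antiAffinityPodSpec()
	term := spec.Affinity.PodAntiAffinity.RequiredDuringSchedulingIgnoredDuringExecution[0]
	fmt.Println("spread pods across topology key:", term.TopologyKey)
}
```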
We are not completely oblivious to it, but we don't see a whole lot of input on it at this point. Okay, so we are also working on moving a couple of other features, like taint nodes by condition and also taint-based eviction, to beta as well.
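For reference, taint-based eviction is driven by NoExecute taints that pods can tolerate for a bounded time. A pod-side toleration built from the core/v1 types might look like this (the 60-second grace period is an arbitrary example):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// notReadyToleration lets a pod stay on a node for 60 seconds after the node
// controller taints it as not-ready with a NoExecute taint, instead of the pod
// being evicted immediately; leaving TolerationSeconds nil tolerates the taint
// forever.
func notReadyToleration() corev1.Toleration {
	grace := int64(60)
	return corev1.Toleration{
		Key:               "node.kubernetes.io/not-ready",
		Operator:          corev1.TolerationOpExists,
		Effect:            corev1.TaintEffectNoExecute,
		TolerationSeconds: &grace,
	}
}

func main() {
	t := notReadyToleration()
	fmt.Printf("tolerate taint %q for %d seconds\n", t.Key, *t.TolerationSeconds)
}
```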
Another big effort is to build the scheduling framework. That is still in the design phase; we are moving forward on that as well. Scheduling policies is another big effort that has been kind of blocked by the very different opinions that everybody in the community has. Some people believe that we should do it with OPA; some people have other opinions about how to design it, one way or the other, so you're not seeing a whole lot of movement in that area. There is a design proposal already written, but as I said, there have been so many comments, and every person has a different opinion about it.
Somewhere in here there's a full screen, but I think I'll just forge ahead without it. So I was invited, by way of time savings, to recycle some KubeCon content. I'm going to do that, with some small updates. Hopefully some of the stuff that we talked about at KubeCon will be of interest. Just a little background.
So if you want to take a look at them, there's also a link to the YouTube video from the SIG intro that we did. Shout-out to Tom, who both at Austin and in Copenhagen did a bang-up job of running the session, so thanks to him. I'll go through this pretty quickly. This is just our kind of standard SIG update: this is the stuff we work on, here's our logistics. These haven't changed in a while, so I'll go through that pretty quickly.
Probably the interesting bits here are what the schedule is for the big runs: there's an even/odd-day schedule for the 5k performance and correctness runs, so there you go. I'll skip ahead here for a second. I think one of the things that we've found, especially in the user community, is that there is a fair bit of obsession with the number of nodes that are supported. We found this pretty commonly, so one of the things we try to do is educate the user community.
The number of nodes is not the only thing; there are a bunch of different performance axes here. These are not the only ones, but I think these are the ones we feel we see issues around, and so we're trying to introduce this concept of the performance envelope: how, if you stretch your performance in one direction, it's going to shrink the performance in another.
One of the things that I think we should do is actually explore the space a little bit more, so that we're able to give our user community a bit more guidance about, if they stretch one of these axes in one direction, what actual impact that has. This has been a useful way to try to get users up to speed on the fact that there's more than one thing that matters. Then I will go through these here.
I think the group here has seen them, but one of the things we like to do is just kind of show people what publicly available resources there are for them to look at; some of you can explore this on your own, perhaps afterwards. Again, we find that a lot of the more beginner users like to show up and hear about what the pro tips are.
Also, there's not very good awareness of Kubemark outside the developer community as a possible resource for how to do cluster qualification and testing a little bit more cheaply and efficiently. It's great for pipelines. Okay, so a little bit more here. We kind of skipped over this at KubeCon, but I'll highlight it here: there's some good reading here if you're interested. As Josh mentioned, if there are security, sorry, scalability geeks here that want to look at this, there are some guidelines that are probably good for everybody.
Take a look at those. The performance regression study that Shawn did is, I think, a fascinating read for anyone who has even the slightest interest in this, so take a look at that as well; that's a good piece of work. If you're interested in deeper dives on the SLOs, have at it. And then, kind of work-in-progress stuff: I think two bits here are worth a mention. One is that there's more work going into real-workload testing, and I've put the pull request here.
Others may have a different perspective here. I would not say that performance is bottlenecked on etcd overall; it kind of depends again on how you're running, but the point I was trying to make was that we have seen performance regressions with regard to different versions of etcd.
So we're just urging caution: don't just install the latest version of etcd, point your cluster at it, and expect that things are going to work great. There's a lot of testing that's done, and if you go back through and look at the PR history, you'll find more than one case where we had to revert an etcd version because there was some performance regression that happened. So my comments were really more about regressions and stability than they were about performance issues per se.
Yeah, a big update. Let's see, we're doing stuff. Here are some things that we have done recently. David made a new dynamic client with a streamlined interface. The old dynamic client will be available in the client library under a deprecated-dynamic-client directory. Sorry, a question: is this the unstructured one that you'd use for a CRD or a type you don't know anything about? Yeah, so that's coming. One thing to know about it is that the client-side QPS rate-limit behavior has changed.
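As a rough sketch of what using a dynamic client looks like with a recent client-go (not necessarily the exact interface that was new at the time), and of where the client-side QPS and burst settings live, consider:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Client-side rate limiting lives on the rest.Config; if the default
	// behavior has changed underneath you, this is where to pin it explicitly.
	cfg.QPS = 20
	cfg.Burst = 40

	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The dynamic client works with unstructured objects, so it can talk to
	// any resource, including CRDs it knows nothing about at compile time.
	gvr := schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}
	list, err := dyn.Resource(gvr).Namespace("default").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, item := range list.Items {
		fmt.Println(item.GetName())
	}
}
```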
So if that sounds like something that you might care about, you can go read up on that. What else? We're still working on putting in the first several pieces for CRD versioning, and we made a sort of late-breaking discovery about our design for version priorities, but I think we can still get the no-op conversion in before 1.11. So look for that. I think those are the two major things; there was something else.
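To illustrate what multiple versions with a no-op conversion mean in practice, here is a sketch using the current apiextensions.k8s.io/v1 Go types (at the time this was still v1beta1, and the group, kind, and version names below are made up):

```go
package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// multiVersionCRD declares two served versions of the same kind. With the
// "None" conversion strategy the API server only rewrites the apiVersion field
// between them (a no-op conversion); v1 is the version persisted in etcd.
func multiVersionCRD() *apiextensionsv1.CustomResourceDefinition {
	openAPISchema := &apiextensionsv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
	}
	return &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "widgets", Singular: "widget", Kind: "Widget", ListKind: "WidgetList",
			},
			Scope: apiextensionsv1.NamespaceScoped,
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
				{Name: "v1alpha1", Served: true, Storage: false, Schema: openAPISchema},
				{Name: "v1", Served: true, Storage: true, Schema: openAPISchema},
			},
			Conversion: &apiextensionsv1.CustomResourceConversion{
				Strategy: apiextensionsv1.NoneConverter,
			},
		},
	}
}

func main() {
	crd := multiVersionCRD()
	fmt.Println(crd.Name, "serves", len(crd.Spec.Versions), "versions")
}
```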
I was going to, and I'll give an update for the apply working group. One thing to note is that we're using a feature branch. We've made some changes to try to put everything in master that can go in master, like the refactoring and that sort of thing. We're not going to reintegrate before 1.11, so we'll keep working through the code freeze, because we're in our own feature branch, and I should say that our use of a feature branch is somewhat experimental.
Any questions for API Machinery or the apply working group, or any of the other SIGs that presented today? All right, so we actually are flip-flopping the agenda, and the demo is on. I see Vasu and Rafael right now, thanks for joining us, y'all. So you have ten minutes, and if you want to go ahead and share your screen, we can get going. You might want to adjust your terminal view to make it bigger if you have any terminal stuff going on.
The Kubernetes Gardener, for reasons which will become evident later, has a mission statement which was kind of derived from the needs of larger enterprises like SAP, where we need to operate really many, many Kubernetes clusters across all of the various public cloud providers, but also on private clouds. That's definitely a mouthful, but before we enter into the demo we want to give you a quick overview of Gardener's architecture and its setup.
In fact, Gardener is completely open source and it's mostly written in Go, and we ensured that it's a first-class citizen of Kubernetes itself, so that we don't actually need other control tooling around it. So now to the infrastructure setup itself. The first thing you need to do is to bootstrap an initial central cluster, which we call the garden cluster. We use an internal tool called Kubify for this purpose, which we will also open-source.
So once you have the initial Kubernetes cluster, you deploy the dashboard and the Gardener components; the Gardener is implemented as an extension API server. Next we need a seed cluster on the target infrastructure. Now, note that we do not need access to the target infrastructure-as-a-service API; we only need to connect to the Kubernetes cluster API of that seed cluster. This enables us to piggyback and get access into private cloud and on-premise setups.
We deploy the control plane components in a namespace and make use of the isolation features which are available there. The first thing we deploy, of course, is an etcd with the backup sidecar, and then obviously all the other control plane components, and with the load balancer active you almost have a working control plane. In our setup we always give every control plane a dedicated Prometheus monitoring and logging setup as well. Now to the end-user cluster itself.
So because we seed the control planes in the seed cluster and it's now ready to sprout, we call the end-user cluster the shoot cluster. For accounting and isolation purposes it's typically located in a different account on the target infrastructure, and we first execute a Terraform job with access to the IaaS API to prepare, or just verify, that the network is set up correctly. Then we let another component, our machine controller, materialize the nodes; the machine controller lets you order virtual machines via Kubernetes custom resources.
So it's a first-class citizen of Kubernetes itself, and we also connected the autoscaler directly with the machine controller. I think the rest is pretty clear. Maybe the only notable thing is that, because we have an air gap in between, we need a VPN sidecar so that the API server and monitoring get access to the nodes. So I think now it's high time for the demo. Raphael, you go ahead.
Hi everyone, I'll try to give you a quick overview of what this looks like in action. I'm now logged into the Gardener dashboard of one of our test systems and I'm targeting the CNCF project, which I've created here for the demo purpose. I have pre-created three Kubernetes clusters, two on AWS and one on GCP, and at a glance I can see the most relevant information for all of these clusters. This one here has an issue, because some of its components are reporting an incorrect state; this one is healthy.
What I want to show you now are some of the greater features of that dashboard. So I quickly switch to a terminal; here I have pre-configured my user account to be an administrator, by just modifying a custom resource, and now I'm able to see all the projects managed by the Gardener system in this landscape. But I can also drill down into the projects and see all the clusters which are created there on the different target infrastructures they have, and get detailed information here.
You can have access to the clusters which have issues, and I can filter by the ones which have problems; as an operator, that's basically what you care most about. We also have the possibility of a journaling and ticketing system, where operators or administrators can pin certain messages to the clusters.
Okay, let's take a more technical view. I'm here in the garden cluster, where we basically run, in the garden namespace, the API server extension, the controller manager, and the dashboard as normal Kubernetes pods, and the project that you just saw is represented as a namespace, so you can use regular kubectl commands against the Kubernetes API, just based on that namespace.
Gardener created secrets in that namespace for the kubeconfig and SSH keys, so that the user can conveniently download the credentials and access information. What we are now going to do is create a cluster via the dashboard and show what actually happens in the various pieces, the seed cluster and the garden namespace. For that I have prepared a watch here, and also in this pane I have prepared a watch for the seed cluster.
Well, you can already see there are namespaces for the clusters which already exist there. Now I'm creating a new cluster via the dashboard, that's for the demo, and you can see it appear.
The technical representation of that cluster is basically like a normal Kubernetes manifest: we extend the Kubernetes API with the shoot resource, which has the cluster-specific information like the pod network and so on in the spec section, and the status information, which tells you the last operation, the health checks, and the conditions, just like for pods, and Gardener reconciles the clusters every ten minutes. So now to the seed cluster.
Let's take a look at what's running there. Now, as was explained, that's a complete control plane, managed in the seed cluster, for an end-user cluster. The etcd is a StatefulSet, we have our regular Kubernetes components running as pods here, we have some monitoring components, Prometheus with the Alertmanager, and we also have the machine controller manager; it extends the Kubernetes API with custom resources, machine resources, which are reconciled into actual virtual machines. So I'm now targeting the seed cluster.
A question, too: I was wondering if you plan on any bare-metal type support. Okay.
So when you look at our architecture diagram, we more or less piggyback on an infrastructure layer which is provided by a partner, for example, or other partner companies, and more or less they offer us an infrastructure API which gives us access to bare metal. We can really piggyback on that, so that's kind of the division of work: we come in at the second layer.
Oh yeah, this is an interesting question. We're actually talking to our colleagues and partners at VMware, and we could actually do it ourselves, indeed, but we also want to have support there, so we're actually waiting for our VMware colleagues to jump in and do that for us. And this is an open-source project; we are at the moment consuming a lot from the Kubernetes community, thanks a lot for that, we're learning a lot, and this is kind of our way to give back.
Awesome, thanks again, y'all, for coming on. All right, now we are on to the announcements part of our day. First, shout-outs: a couple of warm welcome messages to Liz and Chuck Ha on their journey in joining the Kubernetes organization. They both have been having a big impact in SIG Cluster Lifecycle, so welcome to them. It looks like thanks to Chant Z and D. Anderson for a great conversation on bare metal options and concerns in some Slack channels this week. Also a quick shout-out to Liggitt, master wrangler of end-to-end bug tests.
Jordan has fixed many fun bugs, and it looks like there's even a link to the bug in there for folks that want to check that out, and he keeps things green; that was from Ben Elder. And then Team Cole says that as a new contributor they can 100% endorse Carolyn for being really good at bringing on new contributors, I'm guessing this is for SIG Service Catalog, and for dedicating a lot of time and effort to make sure that they're successful, which is an awesome shout-out. So thanks, Carolyn. And then we also have some Help Wanted items today.
SIG UI is looking for new contributors to climb the ladder and eventually become maintainers. Start with an open issue first, and then also feel free to reach out on the mailing list and Slack channel. They do have some current maintainers that are very interested in mentoring others. So if JavaScript is your thing and you like the dashboard, please feel free to reach out to SIG UI. And we just heard that SIG Scalability is also looking for contributors; reach out through their various channels and open issues.
If contributor mentoring sounds like your thing and you want to check out all the cool programs that we have, feel free to fill out that form. Our next mentoring initiative is Meet Our Contributors, which happens on the first Wednesday of every month, so June 6th is our next event for that. That's a YouTube livestream event where it's mentors on demand: you can ask questions of contributors, questions like why is my test failing, how do I become an approver, and everything in between, so join us for that. And then, KubeCon follow-ups.
The videos and slides are up, and thanks to CloudYuga for putting together a GitHub repo of all the videos and slides, that's pretty awesome, so it's a comprehensive list. And then some quick other comments: don't forget to check out discuss.kubernetes.io, it's a new communication platform that we are testing. This is an open-source tool and it's very similar to a forum or a message board, so please post messages there so that we can have some public discussions. And then, last but not least, we are going to do a contributor
ask-me-anything during the community day at DockerCon, which is coming up June 13th in San Francisco. If any contributors are going to DockerCon and would like to pop in and help us answer questions, feel free. This is an unstructured three-hour window, just taking questions from folks at DockerCon as they walk in. Does anyone else have anything to add, whether it's shout-outs, Help Wanted, or any other announcements that they'd like the floor for?