From YouTube: Kubernetes Community Meeting 20181115
Description
The Kubernetes community meeting is intended to provide a holistic overview of community activities, critical release information, and governance updates. It also provides a forum for discussion of project-level concerns that might need a wider audience than a single special interest group (SIG).
See https://github.com/kubernetes/community/blob/master/events/community-meeting.md for more information.
A: All right, welcome everybody. It is November 15, and this is the weekly Kubernetes community meeting. This is a public meeting that we have every week where we share release updates and SIG updates from across the project, and generally just sync up on what everybody is doing. Today we are going to do a slight change from usual: we're going to have a SIG update first, from SIG Service Catalog, and then we're going to go into a demo where Alex is going to demo Pulumi — I hope I pronounced that right — then we'll go into release updates and a tip of the week, and then we will finish with SIG IBM Cloud doing their status update, and then some announcements around KubeCon and the usual bits around the schedule over the holiday. Please remember that this meeting is being live-streamed on YouTube, recorded, and put into a YouTube archive and playlist.
B: Service Catalog is responsible for trying to bridge the gap when you have, say, a SaaS-like service or software service — like a MySQL database, maybe RDS or Key Vault, or all sorts of various things your cloud providers have — and they expose it through the Open Service Broker API. Service Catalog knows how to talk to that, and then allows you to manage all of these cloud resources directly from Kubernetes.
B: Instead of having to jump back and forth between different tooling, like, you know, what the heck are we doing — so what we did last release, this was for the 1.12 that we put out, is kind of interesting. So far there has been one broker installed for the whole cluster that understands how to talk to, say, Azure, or understands how to talk to a data store, something like that. And because there's just that one thing, oftentimes it needs to make authentication decisions based on who actually made one of these Kubernetes resources.
B: So we finally plumbed identity all the way through, so that brokers can say that, sure, Sally can do something but Bob can't — because oftentimes Kubernetes RBAC is the wrong place for us to make those decisions; we actually want the decisions happening inside the cloud provider itself. So that's filled in. Another neat feature is the ability to have a broker that isn't cluster-wide, so that you could have a broker that's customized and tuned, maybe for a bunch of devs inside of one namespace.
B: These different environments have different needs — it could be different teams that need to be billed separately under their different subscriptions, and things like that. They can be broken out by namespaces and be able to talk to cloud providers and create things in different resource groups or regions, and things like that. So that's pretty much complete now, which is cool, and the API for it is GA.
B: So that's in there as well, and we started laying the groundwork for a new feature called service plan defaults. Again, the idea is that we're trying to remove the friction for using these catalogs and these services, because oftentimes it's like: I don't really know how to provision something appropriately on GKE, or how to get RDS provisioned appropriately with the right IP settings and networking, and things like that.
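To make the idea concrete: mechanically, "service plan defaults" amounts to an operator-curated default parameter set that a user's provisioning parameters get merged over. The sketch below is purely illustrative — the function name, parameter shapes, and merge semantics are assumptions for the sake of the example, not the actual Service Catalog API.

```typescript
// Illustrative only: merge user-supplied provisioning parameters over
// operator-curated plan defaults, so a dev can provision, e.g., a
// database without knowing the right networking settings up front.
type Params = { [key: string]: unknown };

export function applyPlanDefaults(defaults: Params, userParams: Params): Params {
  const merged: Params = { ...defaults };
  for (const [key, value] of Object.entries(userParams)) {
    const base = merged[key];
    // Recurse into nested objects so a user can override a single field
    // (e.g. networking.subnet) without wiping out the other defaults.
    if (
      typeof base === "object" && base !== null && !Array.isArray(base) &&
      typeof value === "object" && value !== null && !Array.isArray(value)
    ) {
      merged[key] = applyPlanDefaults(base as Params, value as Params);
    } else {
      merged[key] = value;
    }
  }
  return merged;
}
```

So a user overriding one networking field would still inherit the rest of the plan's defaults.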
B: That's actually a really awesome thing as well, because if you are new to either Kubernetes or Go, we always have open issues for people to implement things in our CLI, and it's really helpful if you're trying to get involved in Kubernetes, because they're always accessible — you don't need to understand a ton, and you can help with our CLI issues. So, on to our upcoming cycle.
B: I don't know if this was announced at the last community meeting, but Jay Boyd is now a SIG chair, and Paul Morie stepped down from the Red Hat side, so they kind of did a little switcheroo. Jay helped push through namespaced service brokers, and he's been super active. And then otherwise, like I said, we're working on our CLI.
B: Actually, we've kind of come along for the ride and we want to stay in step with everyone — we love all the work that's happening — but I think we're still not at the point where we can switch over to CRDs, which is super sad. But we're always hoping, and we're always evaluating. And along that same line, because CRDs have come so far, we're starting to see the model for Service Catalog show its age a little bit. CRDs are really awesome and we'd like to take advantage of them.
B: So we're working with SIG Apps to find a way to use CRDs with Service Catalog, and with other things in general — it may be an operator instead; it doesn't have to be Service Catalog. People may have a CRD, say a MySQL kind, some database thing, or a key-value store — anything that could be an underlying service that a cloud provides.
B: So that's why we're working with SIG Apps and trying to find more people to get involved and make this an effort outside of SIG Service Catalog. If this is something you're kind of interested in — being able to write it once and move it around — the issue opened like two days ago. And then, like I said, otherwise we're always looking for people to contribute. We have some new chairs, but we definitely had a lot of turnover with reviewers and members.
B: So if you would like to get into Kubernetes and you're not familiar with Kubernetes, or you're not familiar with Go — we welcome everybody, all skill levels, and we do help mentor and get people involved. Even if you're new to, say, open source, we've helped people get involved as well. We have a milestone right now that we're working really fast towards, leading to the CLI, so every issue there is good for new people, and we do have other ones that have a ton of really good information.
B: They explain: this is the code you need to change, this is how you need to change it, here's all the context. We have issues that will walk you through everything you need to do to get started, so we really do want more people to join us and hang out with us. You can find us on Slack — we're way more active there than on the mailing list, to be honest — and we have a cute little webpage that explains what Service Catalog is. Any questions?
D: Okay, can you all see my screen? Okay, cool. So I'm Alex, and I'm going to be talking about and demoing Pulumi today. I'm just going to start with a couple of slides that motivate everything, and then I'm going to jump right into the code. So Pulumi is an open-source toolchain for managing cloud infrastructure with code. Before I explain what that means to us, I'm going to present a motivating example.
D: The way that you would normally do this is somebody — maybe the storage infrastructure team — would provision the actual database, maybe with Terraform. In order to get that and reference it in the Helm chart, you would have to get the connection string after the database is initialized, parse it, interpolate it into a Kubernetes secret somehow — maybe using Go templates — and then reference that in the Helm chart, maybe using Bash. And so the core question that I have is:
D: Is there a way to have a more principled workflow around stuff like this — which I believe is super common — that doesn't sacrifice the declarative nature of the Kubernetes API? Okay, so at Pulumi — I don't have time to explain all of the things that it does, but at a high level, for this example, the relevant information is that Pulumi exposes an open-source SDK. That SDK exposes a programming model that is declarative in the sense that you declare a steady state that you would like the system to drive towards. The steady-state declaration is built using real programming languages — so not via YAML or JSON, or, you know, Jinja or anything like that. We support TypeScript, JavaScript, and Python, and each SDK covers all major public clouds. So what I want to argue is that each of these things is really useful for workflows like this, and that we can have the advantages of all of them but also keep many of the advantages of having a declarative API in Kubernetes.
D: So basically there are going to be three components to the code demo. The first component is that we have a JavaScript program that declares the steady state we want. The steady state we want is: create a Cosmos DB instance, which is like an Azure-flavored document store; parse and reference the connection string of that database in a Kubernetes secret; and then reference that secret in the Helm chart.
D: So that's the steady state that we are going to declare. We're going to use a program to build this steady state and register it with Pulumi. The Pulumi engine will create a plan, and then, when we ask it to, it will execute the set of operations required to enact that plan. Okay — if that doesn't make sense, just interrupt me with questions. So the first thing we'll do is look at some code really briefly before we look at the provisioning experience.
D: So in this example there are basically three main parts. The first part is that we are declaring, with this bit of code here, the Cosmos DB instance — we're declaring the steady state that we want for Cosmos DB. It's like: we want some consistency policy, we want some sort of failover requirements. So basically, at this point, when we execute this constructor, it will register with the Pulumi engine.
D: It will say: I want this steady state. And then, after that, we will declare this Kubernetes secret, and this is where things start to get a little bit interesting. You can see that we're just calling k8s.core.v1.Secret. The schema here is precisely the Kubernetes schema — we're not editorializing or creating a new API.
D: It's literally what you would put inside a Kubernetes secret, but with one twist, which is that we have a function called parseConnectionString in a package called helpers, and we're referencing what should be a computed value on the Cosmos database. So this is a declarative specification of what we want the Cosmos DB to look like, and an exported value on that is the connection strings. So we can take that connection string, parse it with this, and then put it into the Kubernetes secret data.
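For intuition, a helper like the parseConnectionString Alex mentions might look roughly like the sketch below. This is a hypothetical plain-TypeScript version — the real Pulumi helper would operate on the SDK's computed Output values rather than plain strings, and the exact field names are assumptions.

```typescript
// Hypothetical sketch of a connection-string parser like the
// helpers.parseConnectionString mentioned in the demo: split a
// MongoDB-style URI into the fields a Kubernetes secret's data
// stanza would carry.
export function parseConnectionString(
  conn: string
): { user: string; password: string; host: string; port: string } {
  // Expected shape: mongodb://user:password@host:port/?options
  const m = conn.match(/^mongodb:\/\/([^:]+):([^@]+)@([^:/]+):(\d+)/);
  if (m === null) {
    throw new Error(`unrecognized connection string: ${conn}`);
  }
  const [, user, password, host, port] = m;
  return { user, password, host, port };
}
```

The point is that in the Pulumi program this parsing is declared once, against a value that does not exist yet, and the engine fills it in at deploy time.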
D: It will put a placeholder value in here, and then, when we actually run the plan, it will take the connection string and plug it in when it registers this resource, allowing us to parse it and put it inside the data. So there is a dependency here: it knows to provision the Cosmos DB instance first and then the Kubernetes secret. The last bit is declaring a steady state for a Helm chart.
D: What we're going to use here is the Bitnami default Node application. We're going to say we don't want MongoDB to be provisioned in-cluster; instead, we want to use this external DB, and we're going to reference the name of the connection-string secret. So that's how Pulumi knows that this chart takes a dependency on that connection string.
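The chain of declarations being described — database, then a secret derived from its connection string, then a chart that names the secret — can be mocked in plain TypeScript. This is only a mock of the shape of the program: the real demo uses the @pulumi SDKs, where the connection string is a deploy-time Output rather than a literal, and all names here are invented for illustration.

```typescript
// Plain-TypeScript mock of the demo's three declarations, showing how
// the output of each resource feeds the input of the next.
interface CosmosDb { name: string; connectionString: string; }
interface K8sSecret { metadata: { name: string }; data: Record<string, string>; }
interface HelmChartValues { externalDatabase: { existingSecret: string }; }

// 1. Declare the Cosmos DB instance (the engine would provision it and
//    compute the real connection string at deploy time).
const db: CosmosDb = {
  name: "todo-db",
  connectionString: "mongodb://u:p@todo-db.example:10255",
};

// 2. Declare a Kubernetes secret whose data is derived from the
//    database's computed connection string (base64, per the k8s schema).
const secret: K8sSecret = {
  metadata: { name: "todo-db-conn" },
  data: { "mongodb-uri": Buffer.from(db.connectionString).toString("base64") },
};

// 3. Declare the Helm chart values, pointing at the secret by name, so
//    the dependency chain db -> secret -> chart is explicit.
const chartValues: HelmChartValues = {
  externalDatabase: { existingSecret: secret.metadata.name },
};
```

Because each declaration references the previous one, the engine can infer the provisioning order without it being stated anywhere.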
So if I move to the terminal — I'll do clear first — and I run pulumi up, it will first show me a preview, a plan of the set of operations it expects to do. You can see that I've already provisioned the MongoDB instance, and I've added the secret and the chart here, so only those are created, because it takes a while to provision the database. If I select details here and scroll into this, we can see that these are just normal Kubernetes objects.
D: So since the database has already been provisioned, we can see that parseConnectionString has populated the data with the connection-string information for the database. It has allocated a name to the secret, and the secret is referenced inside the chart's values — what would be the values.yaml. Then we see that we have just a normal Kubernetes service and a normal deployment which references that secret. So if I select yes, what will happen next is it will start to do those operations in a topologically sound ordering.
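A "topologically sound ordering" just means dependencies are created before the resources that reference them. A minimal sketch of that idea — not Pulumi's actual engine, which also handles cycles, diffs, and parallelism — is a depth-first walk of the dependency graph:

```typescript
// Minimal sketch of ordering resource operations so that dependencies
// are created first, as described above: the database before the
// secret, the secret before the chart. Assumes the graph is acyclic.
export function topoOrder(deps: Record<string, string[]>): string[] {
  const order: string[] = [];
  const seen = new Set<string>();
  const visit = (node: string): void => {
    if (seen.has(node)) return;
    seen.add(node);
    for (const d of deps[node] ?? []) visit(d); // create dependencies first
    order.push(node); // then the node itself
  };
  for (const node of Object.keys(deps)) visit(node);
  return order;
}
```

For the demo's graph, any valid order must place cosmosdb before secret and secret before chart.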
D: So it knows we need to do the secret first. Now it's starting to create the chart, and you'll see that the process of provisioning the resources is split into very fine-grained status messages. We can see that the deployment initialization is rolling out, we can see when the replica sets are updating, and we can see that, because this is a service of type LoadBalancer, we're waiting to allocate the IP address to the service. This can take a significant amount of time.
D: Once the public load-balancer IP address that's allocated to the service is finished provisioning, I can use a command to retrieve that IP address programmatically — you can see it here. So if I do pulumi stack output with the front-end address, pipe that to the clipboard, and then go over to my browser and click, I can see that I have a to-do application.
D: I can put stuff in here, and if I take that and open it in a new browser, you know, all this stuff is persistent. Okay — so I think I have 30 seconds left. One thing that I do want to point out is that this does have a strong notion of update. So if I change the data object here and just put some random stuff in there —
D: One of the things that's interesting about our model is that Pulumi is intelligent enough to know that this should trigger an update in the deployments that reference it. So when I run pulumi up, we can see that we're actually going to replace the secret, and then that will trigger a rollout in the deployment that references it. If we look in the details, we can see the actual order of operations that happens here: it will create a replacement with the new connection string, which is just nonsense now.
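The behavior described here — a changed secret forcing a rollout of the deployments that reference it — is commonly implemented by deriving a checksum of the secret's contents and embedding it in the pod template, so any change produces a new template. This is a sketch of that general pattern, with invented helper names; it is not necessarily how Pulumi's engine detects the change internally.

```typescript
import { createHash } from "crypto";

// Hypothetical helper: derive a stable checksum from secret data. If the
// checksum is stored as a pod-template annotation, any change to the
// secret changes the template and therefore triggers a rollout.
export function secretChecksum(data: Record<string, string>): string {
  const canonical = Object.keys(data)
    .sort() // key order must not affect the hash
    .map((k) => `${k}=${data[k]}`)
    .join("\n");
  return createHash("sha256").update(canonical).digest("hex");
}

// A deployment referencing the secret needs a rollout exactly when the
// checksum of the secret's data has changed.
export function needsRollout(
  oldData: Record<string, string>,
  newData: Record<string, string>
): boolean {
  return secretChecksum(oldData) !== secretChecksum(newData);
}
```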
D: So basically — if I understand your question correctly — we're generating all versions of all API objects as part of the API surface, and then whether it's a successful deployment depends on which API server you deploy it to. So if you deploy, like, a v1 Deployment to a really old version of Kubernetes, that won't work. Is that what you're asking?
C: Cool. 1.13 — we are almost nearing the home stretch. Code freeze is tomorrow at 5:00 p.m. PST, so these are the last two days of active development. There are a lot of enhancements that got merged yesterday and a bunch that are due to go in today. There are a handful of planned 1.13 items that are at risk and super tight to get in; the release team is working closely with the owners to assess the risk and move them to 1.14 where possible, but code freeze is as of 5:00 p.m.
C: So far, CI signal looks good on master. That said, we are seeing a few failures as of this morning as a bunch of PRs merged, but we will be tracking those, and provided things are stable, we still plan to cut our beta after code freeze tomorrow. Master will reopen for 1.14 on 11/28, and the dates are updated in the SIG Release calendar as well. We do have a few test failures and flakes that I've listed that could potentially be beta or release blockers, depending on whether they show up just a few days before the release. The CI signal team is actively working with the SIGs to address those flakes — so again, as usual, please consider those as priority and help us deflake them in the next couple of days if possible. The final call-out is for release themes. We do have a draft of the release themes that was sent out to all the SIG leads on Monday.
C: We want your help in filling out the major themes, and also any known issues that you want to call out in the release. There's a GitHub issue that's also listed in the release notes, so if you could leave a comment there regarding any known issue, we'll make sure that it's included in the official release notes.
C: Finally, as promised last week, we have an issue for drafting the 1.14 release team — thanks to Steven for getting the ball rolling. For those who are interested in volunteering, or in signing up other folks that you know who would love to be part of the release team, please join that issue and comment away. There are a few links there; we are trying to refine the selection process a little bit this time around, so there are a couple of PRs there.
A: All right, any questions about the 1.13 status? Oh — I don't know if Dims is on, but thank you for maintaining this; he's doing that out of the kindness of his heart. All right, and with that, just a quick reminder that we do have a contributor cheat sheet. I've added the link in the notes; it basically links a bunch of useful URLs and little properties across the Kubernetes namespace that might be useful for you throughout your day-to-day work, so check that out. All right, moving on: Richard, SIG IBM Cloud.
F: All right, so yeah, thanks for letting us do an update here on SIG IBM Cloud. Last cycle we really focused on just the start of activities for our SIG charter and so on, and our meetings were mainly around demos and presentations on Kubernetes and IBM Cloud. And then we did start attending the SIG Cloud Provider meetings, and that leads us into our plans for the upcoming cycle: we're starting to work towards moving our cloud provider — which right now lives behind the IBM firewall — upstream, and making it part of the SIG Cloud Provider ecosystem.
F: So we started our cloud provider back in the Kubernetes 1.5 days, and at that time — and since then — we haven't had the ability to put our cloud provider in-tree, because in-tree was frozen. So we kind of monitored where the community was going with that, and I think it moved from in-tree to out-of-tree providers, and that has started to come together and a plan is being put in place. So we are working internally right now to start the process it takes us to upstream our code repo for that. That's going to be our main focus.
F: We'll probably not get it done this cycle; we're looking towards 1.14 to hopefully close that out. In the meantime, we'll continue our conformance testing and bug reporting and all these other things related to running Kubernetes on IBM Cloud. And if you do need to get hold of us, the chairs are listed here, and we have an overview page, a Slack channel, and a mailing list. We're mostly available on Slack — that's the easiest way to get hold of us. And that is all — short and sweet. Any questions? I'll take them now. Thank you.
A: Great turnout, lots of great pics on Twitter. We'll try to make sure that we're publishing lessons learned in the notes and things like that as those people return from their trip. For Seattle — Paris, I don't know if you want to say anything, or I'll just repeat the usual: chairs and owners, if you haven't confirmed with community at kubernetes.io, KubeCon US is now completely sold out.
A: So if you need a ticket or anything, you're just going to be automatically wait-listed. But if you're a chair or a tech lead or something, you should have been in contact with us already; if not, it's not too late — just reach out to us and we will see what we can do.
A: On the community meeting schedule — just real quick, I want to go over it again for the holidays. We will be having a meeting next week, so those of you that are not in the U.S. celebrating that holiday can feel free to participate if you so desire. We're still going to try to do the release retro on December 6th. December 13th is going to be KubeCon, so no community meeting at all, and then for the rest of the year we just won't have community meetings as people go on holidays. In January, a couple of SIGs, including SIG VMware, will be due to do their updates. The steering committee would like to say they're not having a meeting next week; they'll have one just before KubeCon.
E: On SIG charters: we've divided up the SIGs amongst the steering committee members, and we've all been pinging the SIGs — I pinged about half a dozen SIGs recently. API Machinery has one underway, and there's still a handful of SIGs left. It should be super easy now: most SIGs fit a pretty standard mold, the template is pretty short and small, and there are only maybe one or two decisions to make. If you have questions, reach out to the steering committee — it should be pretty painless.
A: Just a reminder: Kubernetes office hours is next week on YouTube — click through the link if you haven't been paying attention. That's our live stream where we answer user questions live on the air. So, if you're looking for a change of pace, want to sit in on one of the office hours, and lend your expertise, we are always looking for more volunteers. We have a session for Europe and one for the Americas; just click through the link.
A: Okay, real quick, let's just go through the shout-outs this week. What this is: it's a channel on Slack. If you see someone going above and beyond the call of duty, feel free to give them a shout-out there, and every week we go through and ensure that that person gets some recognition from the community.
A: So Paris would like to shout out to Josh Berkus and the entire KubeCon Shanghai new contributor workshop team. Josh led the team and carried out the event, planned for the first time in a new market, to welcome contributors from this region. The event is a few hours from this timestamp — or I think it just happened; I'm not sure, I haven't checked Twitter yet. Best of luck and have a great time — Josh, feel free to respond to that. Thank you.
A: Let's add all the names: that's tpepper, booyah, Noah Abrams, Zhangping Zhao, idealhack, Megan Len, and Jerry Zhang — thanks so much for that. Looking forward to seeing those pictures.
neolit123 would like to shout out to Fabrizio Pandini for organizing the transition of phases in kubeadm to GA, and also to thank all the new kubeadm contributors who helped with this work: Iago — did I get that right? hopefully — Yuexiao Wang, Aires Libre, and Rohit.
A: spiffxp would like to shout out to shinobis, Zachary Sarah, and Brad Topol for organizing and running the docs translation sprints at KubeCon Shanghai. And one more would like to thank BenTheElder and all the others who worked on kind: I wanted to give a shout-out to the work done to create kind — nice work. I've experimented with getting kind working with multiple clusters so that we can use it to test Federation v2 with multiple clusters for dev and CI, and I'm very impressed with it.