From YouTube: Kubernetes SIG CLI 20220504
Description
Kubernetes SIG CLI Bi-Weekly Meeting on May 4th, 2022.
Agenda and Notes: https://docs.google.com/document/d/1r0YElcXt6G5mOWxwZiXgGu_X6he3F--wKwg-9UBc29I/edit#bookmark=kix.lv7m4vxqfnok
A
So let us start. Hello, my name is Sean Sullivan. I'm the moderator today for the May 4th edition of the SIG CLI bi-weekly meeting, and we're going to get right into some announcements, since we have quite a few of them. So Kubernetes 1.24 "Stargazer" was released yesterday, the first release now that we are naming our releases.
A
It seems so. This one's called Stargazer, and there's a link there to the blog post for what's included in this release. And within a couple of weeks we've got another big event coming up, which is KubeCon EU, May 16th through the 20th; there's a contributor summit on the Monday.
A
Please check out the link if you're interested in that: Monday, May 16th. So, believe it or not, the 1.25 enhancements freeze is coming up. There isn't a specific date yet, but it was mentioned that it was going to be after KubeCon EU, so it's probably going to be at the end of this month. It's already coming: we just had 1.24 come out, and we now have the 1.25 enhancements freeze coming up shortly.
B
A
Okay, so there's also a survey about the renaming of the Kubernetes master branch to main. There are a couple of links there; please check those out, including the related KEP. Also, in order to spread the load for the Kubernetes images, we're going to be using a different URL, registry.k8s.io, instead of k8s.gcr.io.
A
And there's some more information about that, including a couple of links, if you'd like to know more. There is also a third-party security audit for Kubernetes, so if you get contacted about that, there are some links there with more information about that security audit on Kubernetes.
B
Oh yeah, maybe worth mentioning that we will be presenting during KubeCon EU in two weeks. We have a session with Katrina, Eddie and myself, where we'll be talking quickly about what we've done over the past couple of months in SIG CLI. I'll link the KEP URL after I'm done talking. We will be talking about where we are and where we want to go. Anyone that is present during KubeCon is more than welcome to say hi or high-five us.
A
A
Okay, so this is the part of our meeting where we'd like to find out more about our colleagues, if you're willing. So if you're willing, and you haven't been to a SIG CLI meeting before, or it's been a long time, please introduce yourself. And again, this is completely voluntary.
C
A
Great, it's a pleasure to meet you, and there are, or should be, some Kustomize experts here at the meeting today.
B
I'll go. My name's told me O'Neill; I've attended a couple of meetings, but I'm going to be participating a lot more. I work at SAS Institute in Cary, North Carolina, and we've been using Kustomize for a few years now, as well as Kubernetes. I'm one of the internal subject matter experts on Kustomize, but I still have a lot more to learn, so I just want to get involved in this community, and I'll eventually grab an issue and start participating. Thank you.
D
Yeah, I'll go next. My name is Alex; I work with Sean at Google on the API Machinery team. I've attended a few meetings, but I'll probably be participating some more in the future too.
B
A
Okay, why don't we move on to the next topic, which is going to be discovery cache busting, and I will start off this presentation. So first of all: is Jeffrey here?
A
Okay, great. I just want to make sure that the second at-bat is available. So let me first ask: is the presentation showing?
A
A
Clients use discovery to find out which APIs the API server supports, and it's a two-step process: there's an initial root API query at /api or /apis, and that returns the group versions; then, from that list, we can query specific group versions to find out what resources are supported. And so, with this two-step process, we can construct all of the GVRs.
A
The group version resources that an API server supports. But this information can change, and when can this discovery information change? Well, probably most commonly it changes when CRDs are applied, deleted or updated, but it can also change when an API server is upgraded to the next version, or downgraded, or changed in any way. And finally, at least from my understanding, the APIs can change when an aggregated API server is added or removed.
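The two-step walk described here can be sketched with canned data. This is a simplified assumption of the shape of the responses, not the real wire format (the actual documents are Kubernetes APIGroupList and APIResourceList objects fetched over HTTP):

```python
# Hypothetical sketch of two-step API discovery: a root query lists the
# group versions, then each group version is queried for its resources.
def discover(fetch):
    """fetch(path) -> dict. Returns the set of (group-version, resource) pairs."""
    gvrs = set()
    root = fetch("/apis")                      # step 1: list group versions
    for gv in root["groupVersions"]:
        doc = fetch("/apis/" + gv)             # step 2: resources per group version
        for resource in doc["resources"]:
            gvrs.add((gv, resource))
    return gvrs

# Canned responses standing in for a live API server.
responses = {
    "/apis": {"groupVersions": ["apps/v1", "batch/v1"]},
    "/apis/apps/v1": {"resources": ["deployments", "daemonsets"]},
    "/apis/batch/v1": {"resources": ["jobs"]},
}
print(sorted(discover(responses.__getitem__)))
```

Note that the number of requests grows with the number of group versions, which is what makes the periodic full re-fetch discussed below expensive.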
A
A
This information is cached in memory for most controllers, but there's no mechanism to signal to clients when the discovery information is out of date, when it has changed. And so the current solution, which is pretty inadequate, is that after some specific period of time we re-request the entire set of APIs. For our discovery client that's six hours; it used to be ten minutes, but I think in 1.24 we changed it to six hours, which kind of kicked the can down the road. An example of this inefficiency is whenever you've run with a verbosity level on kubectl, where you say -v=7 or above, and you can see every single one of the requests to the API server.
A
A
What you see is this client asking for all of the discovery information again, because it's been more than six hours. So what we're attempting to do is create a more efficient system, and cache busting allows the client to determine if an API is out of date and needs to be re-requested. Also, with this system, we will never have stale information: there won't be this window of up to six hours where the API changed and we potentially don't know about it.
A
So how does cache busting work? I've included a link to the design doc. Basically, it works because the discovery mechanism is two-step: the initial root API query at /api or /apis will now include a hash with each downloaded group version, and with that hash we'll be able to determine whether or not the group version is out of date.
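As a rough illustration of that idea (this is not the real client-go code; the function names, dict shapes, and hash choice below are invented for the sketch), the per-group-version hash lets a client diff the root document against its cache and re-fetch only what changed:

```python
import hashlib
import json

def content_hash(doc):
    """Deterministic hash of a group-version discovery document."""
    return hashlib.sha256(json.dumps(doc, sort_keys=True).encode()).hexdigest()

def stale_group_versions(root, cache):
    """root: {group-version: hash} as advertised by the root discovery query.
    cache: {group-version: (hash, resources)} held by the client.
    Returns the group versions that must be re-requested."""
    return [gv for gv, h in root.items() if cache.get(gv, (None,))[0] != h]

apps_v1 = {"resources": ["deployments", "daemonsets"]}
batch_v1 = {"resources": ["jobs"]}
cache = {"apps/v1": (content_hash(apps_v1), apps_v1),
         "batch/v1": (content_hash(batch_v1), batch_v1)}

# A CRD-style change lands in batch/v1, so its advertised hash moves.
batch_v1_new = {"resources": ["jobs", "cronjobs"]}
root = {"apps/v1": content_hash(apps_v1), "batch/v1": content_hash(batch_v1_new)}
print(stale_group_versions(root, cache))  # ['batch/v1']
```

Only the stale group version needs a follow-up request; unchanged group versions are served from the cache.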
A
So let's move on to Jeffrey, and I'm going to make him a co-host so that he can start presenting.
B
It's for you to just, like, play a video on your side, so I probably don't need host, sure.
B
A
E
A
B
Now, what we'll notice here is that we have a bunch of requests from kubectl to the API server. This is basically the discovery storm: kubectl first needs to get the list of group versions from the server, and then it asks, for each group version,
B
It would fetch the resources that are published at that group version. We'll actually see what this looks like: we can inspect the response for ourselves and see that it has a list of groups, with each group version belonging to a group, and kubectl basically takes these group-version names and sends requests there to get the full list of resources published at that version. So, let's say, for example...
B
B
The drawback of this caching mechanism is that we can't really update the cache without triggering this discovery storm, and we have the storm basically set to trigger once every six hours, because it is quite expensive. What that means is that you can end up with stale resources. To circumvent the fact that we might have stale resources, there's a workaround where you request a resource that does not exist, or that kubectl does not currently know about.
B
Then it will attempt to perform the discovery storm, just to potentially see a new resource whenever we need it.
B
B
B
Obviously, the advantage of this is that if we actually applied...
B
Now, let's showcase what happens with cache busting. With cache busting, we add a hash to each group version published at /apis. What that looks like is something like this, here, and then right here; and this hash you can use in the request for the group version's resources, and we can just...
B
This is great because now, just based on the /apis discovery endpoint, we're able to know whether a resource has changed, based on its hash, and a single request to the /apis discovery endpoint is able to provide us information on all the group versions that have changed. Before, we didn't really know if a group version had changed unless we sent requests to the corresponding group-version endpoints.
B
These URLs kind of act as static resources, and on the API server we publish headers asking clients to cache them. The hash of these resources is computed based on the underlying content served by the URL, in this case the resources served by a given group version, which means that if the contents of the group version have not changed, then the hash will not change.
B
What we notice is that, right now, we have an unknown resource, but instead of a spec change causing this discovery storm, we only send a couple of requests.
B
B
The API server indicates that these published discovery documents can be cached forever, since they're addressed by hash. If the hashes match, the client will just see that the hashes match and serve directly from its cache, rather than sending the request.
B
A couple of things to note here: the hash is calculated based on the content of a group version's resource endpoint, which we discussed. But that also means that if the group version has not changed, even if, let's say, you're upgrading Kubernetes versions, that cluster upgrade will not affect the hash for the cache. So when a version is upgraded, kubectl can still use its cached version, still keep using the previous cached version.
B
This is actually intentional, because we will now try to refresh the cache by pinging /api and /apis on, like, every request, and that means we'll always have a fresh list of resources whenever we run a kubectl command, compared to the old mechanism, which I think waited six hours before refreshing the cache. And this is all thanks to the fact that triggering this discovery storm is now much cheaper, because if the hashes match the downloaded cached version, then no additional requests are sent to the server. All right.
A
Jeffrey, okay. So it looks like we think we've designed a more efficient discovery system with this cache busting. Does anybody have any questions about it?
B
A
Correct, so we've been working on our own branch of Kubernetes, and we're basically working on the design doc now, and we've started the KEP. We also have a presentation at SIG API Machinery here in about an hour, and we've been coordinating with API Machinery. Actually, most of the work is SIG API Machinery work; it's API server work.
A
A
You know, there are like 70 or 80 of them, when you're trying to do just one get or one particular kubectl command, and because this system will just blindly, after every certain amount of time, grab the entire set of APIs, it's pretty inefficient.
A
A
So that's actually possible: if you do a kubectl proxy, you could just run a curl command against your API server, and that's actually one of the best ways to see what's happening here and see what these APIs look like.
A
So if you do a kubectl proxy and then just do a curl against, for instance, whatever the IP is, and the port, /apis, that'll actually return, and you'll be able to see that it doesn't have the hash for all of the group versions. And then, for one of them, you could do a curl against the endpoint /apis/... For example, one is apps/v1, which is where, you know, the deployments live.
A
The daemon sets live there, controller revisions; there's an entire set of resources that live in that group version, and you can actually see what the API server is returning. It's actually one of the best ways to do it, just as you're mentioning: by curling against the API server after having done a kubectl proxy.
F
For Kui, which currently uses kubectl proxy: would Kui have to extract the hash from the discovery cache and use that in its calls, or is kubectl proxy attaching the hash to all payloads for us?
A
So it's going to depend on whether you're using the client-go cached discovery client; the code that is checking the hashes lives in client-go. Did I state that correctly, Alex?
D
Yeah, I think I can answer that a little more specifically. So we're using standard HTTP headers to communicate to the clients that the response should be cached. We include the hash in the ETag header on the response, and we also include a Cache-Control policy header that tells you it's immutable, and we also include a Vary header that says you should only use this cached response if the content type is the same.
D
So if you're using standard HTTP caching practices, then it should just work, and you wouldn't even have to hit the server: if you have the same URL, since the hash is in the query parameter, you can just hit your local cache. But if you did hit the server with an If-None-Match header and an ETag, then it can tell you 304 Not Modified. So, all the standard HTTP responses you might expect.
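The header flow Alex describes follows standard HTTP conditional-request semantics. A minimal sketch of the server side, modeled as a plain function (the real exchange happens over HTTP between kubectl and the API server, and the exact header values here are illustrative assumptions):

```python
def serve_discovery(body, etag, if_none_match=None):
    """Return (status, headers, body) following conditional-GET rules."""
    if if_none_match == etag:
        # The client's cached copy is still fresh: no body is re-sent.
        return 304, {"ETag": etag}, b""
    headers = {
        "ETag": etag,                          # hash of the served content
        "Cache-Control": "public, immutable",  # hash-addressed, so cache freely
        "Vary": "Accept",                      # cache separately per content type
    }
    return 200, headers, body

status, _, _ = serve_discovery(b"{...}", '"abc123"', if_none_match='"abc123"')
print(status)  # 304
status, headers, _ = serve_discovery(b"{...}", '"abc123"')
print(status, headers["Cache-Control"])  # 200 public, immutable
```

A client that omits If-None-Match always gets a full 200 response, which is exactly the bypass behavior discussed next.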
F
D
F
C
And the question: can you actually bypass that and say, hey, don't serve from the cache?
D
Right, it's entirely up to the client. If they want a new copy of the data from the server, they just don't include the If-None-Match header, and then the server will reply with the freshest data that it has.
C
From a command standpoint, is there a debug flag or something which basically completely overrides the logic for the local discovery cache?
D
C
D
A
So, if you've heard of the RESTMapper: the RESTMapper wraps this discovery client, and the interface that's generally used now is the resettable RESTMapper, so you can call Reset on the RESTMapper. This is at a higher level of abstraction than what Alex is talking about. So if you're using the RESTMapper and you call Reset, it will invalidate the cache and grab everything. Did that make any sense?
A
Cool. Well, I appreciate you creating that, Jeff. So why don't we move on to the next topic, unless there are any more questions, or unless we'd like to continue on the discovery cache busting. Are we okay to move to the next topic?
G
G
...Helm chart registries, and there's a couple of discussion points that I just wanted to bring up and run by Natasha and Katrina.
G
So, two things. This was requested by some folks via, you know, GitHub issues; there are also some customers, Google customers as well, who are interested in having the support, and it seems like there are a few Kustomize users here as well, so maybe they can chime in. So the PR is there; I've got a quick demo I can share of how this works. It's really kind of a non-event: it just works when you reference an OCI URL.
G
The PR is enhancing the built-in. I know that we're thinking, you know, about really kind of freezing the built-in enhancements; this one is a pretty marginal one, and I'm happy to also port this to the function as well.
G
So one point of discussion, and I guess I'd like to get a thumbs up, is that we are okay doing a marginal feature enhancement in the built-in. I think the total lines of code is like five or six, so that's not a lot.
G
So that's point of discussion number one. Point of discussion number two is that, generally, OCI repositories are secured with authentication, and that presents a little bit of a problem, but I do have a solution for it: you don't want to put creds in kustomization.yaml.
G
G
So
I'd
like
to
revert
that
and
and
by
default
reuse
the
configuration
on
the
system
now
anytime,
you
get
access
to
something.
Maybe
you
didn't
have
access
to
before.
G
That's a bit of a security discussion. It seems safe to me because, you know, I've also pinged people on Slack in the Kustomize channel and didn't really hear anything back. So those are, like, the two main points of discussion that I just wanted to bring up and see how Katrina and Natasha felt about this change.
G
And then there's a third point, which is: how would you like me to set up test infrastructure for OCI? You know, where does it go? Because HTTP repositories are available; you know, there's one from Bitnami, and so we have unit tests using those. I didn't find an OCI one to reuse, so I created my own, but that's not a, you know, sustainable solution. Anyway, pause.
H
Sure. So, I guess, to point number one, which is that this is a marginal change: I thought about that a little bit, and I think, although on principle we're trying not to add a ton of features while we're migrating, I'm okay with a very, very small change. This is like two lines; it's just string parsing.
H
G
Yeah, it's all in the PR. So I guess I could make a snarky remark, which is: I don't really have to put the test infrastructure in there. But, you know, that would not... or we could save the test infrastructure for the function.
G
I think it's good, you know; it's good to test this out, but it doesn't have to be added to this PR.
E
G
G
E
That's exactly why we can't accept more like that.
E
E
Part of the consideration as well is whether it requires a great deal of investment to make sure that nothing else is going to break when we change the code, because it's under-tested; if we have to invest in additional infrastructure, or if we have to make a lot of changes around another untested area, like if the part of auth that you're affecting isn't properly tested either, which it sounds like it might not be, because we're speculating about the effect of those environment variables.
E
Then that's another point of hesitance for me, given that we're trying to freeze this old implementation and make a more robust one in the new repo.
G
G
Having a chart registry to test against, you know, that's just a universal problem for the built-in or the function, so I would suggest that we do set that up for ourselves; that's a reusable investment.
H
H
G
That sounds good. Is the code shared between the built-in and the function? I guess I can look, I mean, but while...
G
G
Okay, cool. So, just to summarize, you know, the sentiment of the maintainers here: we're cool with the small change, we do want to have that in both places, and we are all good with investment in test infrastructure, as long as it's testing both the function and the built-in.
G
Okay, wonderful. So, to have that infrastructure...
G
I don't personally have, you know... so I can set that up on the Google side, but I think we're using, you know, a CNCF project or something like that, so I'll need some help getting access to that. I don't currently have access to set up an artifact registry.
H
G
Yeah, that sounds great. I'm happy for somebody else to set up the artifact registry and do all that. So, cool, all right. Well, thank you for your help; this is great.
A
Okay, so we've got another Helm question here. I think Vitaly's joined us, and I think he has a question which has to do with Kustomize as well. Is that correct, Vitaly? Yes.
C
Hi everyone. So basically this is a small feature that just opens things up: the feature is basically passing flags to the Helm chart from the command line. Basically, it opens up the integration for when a lot of things mismatch, like when an older Helm chart doesn't support a newer Kubernetes version, and so on. It's basically a simple feature, but it clears away a lot of hurdles.
C
...that I personally have with deployments, and it feels like a no-brainer to me. I have a PR on my personal branch, because I don't have access to yours, so I can...
C
There is actually a link on the agenda; we can look at this, although it hasn't been reviewed yet.
H
Yeah, so I read this issue right before this meeting. So you're proposing that we have another field in HelmCharts that accepts, like, arbitrary strings that become flags to helm template; is that correct?
C
H
So
we
actually
used
to
have
this
feature
in
the
old
version
of
the
helm
generator
and
we
ended
up
having
to
remove,
remove
it
due
to
like
security
vulnerabilities,
and
so
I
don't
think
we
can
reintroduce
this
feature
exactly
as
it
is.
What
we're
playing
trying
to
do
in
the
next
iteration
function
form
is
to
have
fields
for
every
single
flag
that
helm
template
has
so
once
we've
done,
that
does
that
also
solve
your
issue?.
B
A
So, are there any other questions or comments about that particular issue?
A
Okay, so maybe we could move on to stand-ups, if anybody is willing to do a stand-up.
F
For Kui, I'll be releasing an update pretty soon; I'm waiting for an Electron bug fix, which is pushing it out a little bit, but it fixes a few bugs the community had reported. So that's coming, hopefully as soon as Electron fixes the bug.
D
B
Nick, do we have the date for the 1.25 enhancements freeze yet?
B
B
B
E
A quick reminder that today we also have the KRM functions subproject meeting, which takes place half an hour after the end of this one. So anyone interested in KRM functions is welcome to attend.
E
E
Katrina, half an hour after the end of this, whatever.
A
Okay, well, if we don't have any more stand-ups, why don't we give everybody a few minutes back? Is there anything else before we close?
A
Okay, well, thanks for joining us. Looking forward to seeing you in a couple of weeks, and hope to see a whole bunch of you at KubeCon EU, if possible, in Valencia, Spain.