From YouTube: cf-for-k8s Monthly SIG [August 18, 2020]
A: I anticipate we'll be a little thin on the ground today because of KubeCon and because of vacations. So that's where we are this week.
B: Just tell me to shut up as soon as we get to the point where you actually want to, sort of, get to the important stuff.
C: Who here attended KubeCon online?
A: I can't speak for the people at SUSE. I mean, SUSE went all in, and there's a lot of the organization that is participating more actively. It's just really hard from the west coast, as we've seen from various people complaining about time zones. But yeah, it's just hard. I will catch a few of the talks if I can, but the social aspect is going to be hard, and the conversations are going to be harder.
A: I found CF Summit virtual to be very valuable, but with KubeCon being so huge, I don't know if it would have the same feel, because we had just a single track and we're a slightly smaller community that knows each other better, and so we were able to connect. But I find virtual KubeCon a bit disorienting, to be honest. I think we'll call this a quorum of interested parties in Kubernetes and Cloud Foundry.
A: So, as I said before, welcome to the Kubernetes SIG, Tuesday, August 18th, 2020. We're already recording, so we don't have to kick that off. I would like to thank Benjamin Fuller for agreeing to talk to us about logging today. And so, if you're ready, Ben, I'd be happy to just turn this over to you.
B: Yeah, I should be ready, or at least as ready as I'll ever be. Let me see if I can share my screen here. So I actually have these: I posted the slide show, and I can do it again in the chat, and it should have notes to go along with it, I believe. I haven't looked at the specific file too closely yet, except for the PDF, and I don't believe that one has notes, but I would hope that the two other ones do.
B: So I'm Ben Fuller, I'm the anchor of the Log Egress team, formerly the Loggregator team. I just wanted to talk a little bit about logging: what was the past, or I should say what is, in a lot of ways, the present of logging on CF in terms of the prior architecture; what have we learned during that period of time; what have we created; and how are we bringing that forward into cf-for-k8s. And I think the first thing I want to talk about is that oftentimes it's easy to look at one small section of CF and say this is what CF is and this is who we serve, but we actually serve a lot more than a single person when we're building logging for CF.
B: But that's not the only subset of people we do things for, and one of the other big things that we do is serve platform managers. People who deploy CF often want to take all of their logs and put them somewhere else, whether that be a bucket somewhere, or an external observability tool, or something else. They want to take all their logs and egress them off of the platform, and this is a use case that comes up, that we hear a lot of problems about. A lot of it comes around scaling. A lot of it comes around making sure that the platform can integrate with the tool that they're using. But it's something that we hear come up a lot, and it's something that we've had trouble serving in the past. And in those terms, the group that actually wants these logs egressed isn't always the group that is deploying the CF. Sometimes you have a large group of people; you have application developers on that.
B: We also work with partners and service teams, partners often being the kind of groups that observability teams work with. So we're talking about people who develop panes and observability tools to view logs, and we're also talking about services in terms of add-ons to the platform.
B: People who make service brokers for CF often want to have logs exposed to developers as well, and that's something we work with people on, to try and make sure those logs are exposed. And then, of course, component authors: other people who are making CF, who want to make sure their logs (Cloud Controller logs, router logs, all sorts of things) get into the log stream as well, and are visible to application developers, are sent to external compliance teams, and are visible to platform managers.
B: All these sorts of things. And so I wanted to briefly take a look at this. This is the current architecture for Loggregator. I'm not going to get into too much detail in this picture.
B: I just wanted to show it briefly to say this is what it is, but it mostly sums to: we've got a VM; we've got a thing that wants to emit logs, and Diego is a big example of this. Logs, which I say loosely in this case because they could be metrics, go to an agent, and those logs or metrics go to other agents.
B: It could be all sorts of things, but it's some user of the logging system in general who wants to get the logs. And so, in general, what we've learned, or some of the things we've learned over the years, is that aggregation isn't scalable, or at least it's difficult to scale in a way that isn't very redundant.
B: So we have Dopplers. In the original diagram here we have Dopplers, traffic controllers, and reverse log proxies, and while this is something that has helped in some ways, it has also run into scaling limitations. Even though it's supposed to make it easier to consume from the system, there's usually a separate sense of redundant scalability beyond that. And so one of the things we've been working towards is moving towards agent-based architectures.
B: We did this with scalable syslog, and we've done this even more with these new syslog agents, and this is something that has scaled a lot higher at a top level, but it's also just a lot easier for a lot of our customers to scale than the old firehose patterns.
B: People would look at the firehose and they'd ask: how many Dopplers do I need? How many traffic controllers do I need? How big should they be? And this was always very difficult to answer, whereas when we're looking at agents, it's only about how much footprint they consume on a specific VM that is producing logs, and how do I make sure I have an architecture to receive those logs, which is something you usually need anyway to serve logs into anything substantially big. Something we've also learned is that people have a lot of difficulty working with the firehose, because it's, well, I'm not gonna say... oh shoot.
B: Proprietary may be a little bit of a stretch here, but the Loggregator interface, whether it's the gRPC fact of it or just the fact that we're talking about envelopes as a concept, is something that's very sticky, but also has a higher barrier to entry than a lot of people would like. And it often leads to difficulty changing things, because people spend a lot of time working on a nozzle, and then we change things, and people aren't too happy about it.
B: We've had some people using HTTP over JSON; it's not as performant as syslog, but it's something that is easy to integrate with. And it's just something that we've been working towards: making sure the interfaces we choose are as widely adopted as possible and as little custom as possible. We've also learned that people want to filter things. In the oldest version of the firehose I can think of, you connect, you say you want everything, and you get everything. There's some degree of filtering, but consuming all the logs of the entire platform, while it is one big use case, isn't necessarily always your use case. Sometimes you want smaller subsets of platform logging data, and that's something that we haven't been as good at exposing to people in the past.
B: One thing we've learned as well is that logs and metrics are different. One of the big problems is that metrics are compressible, and treating them like logs doesn't treat them as compressible, which often leads to a large amount of load going through the system. But we've also learned, as Prometheus and other tooling have come through, that there's a lot of different methodology on how to make metrics useful that you don't get by treating them like logs, and that's something we would like to do more of in the future. And we want to try and use more open source tooling rather than building our own tools. So we've been looking towards Fluentd, Fluent Bit, stuff like that, to try and leverage the open source tools to deliver CF outcomes, rather than always seeking to deliver the outcome ourselves as quickly as possible.
B: I hope that's something that we start to see benefits from going forward, but we've also seen this in other parts of CF as we move towards cf-for-k8s. Even moving towards Kubernetes is kind of one of the outcomes of trying to leverage more open source products. And we've also learned, something that we've always kind of known, that context gives large value. Knowing that "hello world" exists as a log has some degree of value.
B: But it's a lot more valuable to know that my application is saying "hello world", and it belongs to me, and it's in this place. That's what we've learned over the years. And, oh sorry, there's one more point. We also want to use this opportunity to try and take some of this architecture, which is complicated for platform users, complicated for the people managing and building the platform, and complicated to understand in all sorts of ways, and try an interface-first approach: fulfill that interface, rather than carrying over all the specifics, a lot of which are either unnecessary or built for backward-compatibility reasons which aren't as prevalent anymore.
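Ben's point that context gives logs their value (knowing which app said "hello world", and where) can be sketched as a toy enrichment step. The field names below are illustrative assumptions, not the actual Loggregator envelope schema:

```python
# Toy sketch of context enrichment: a bare log line becomes far more useful
# once it carries CF application context. Field names are illustrative only,
# not the real Loggregator envelope schema.

def enrich(raw_line, app_name, space, org, instance_index):
    """Wrap a raw log line in a minimal envelope carrying CF context."""
    return {
        "message": raw_line,
        "app": app_name,          # whose application said it
        "space": space,           # where it lives
        "org": org,
        "instance": instance_index,
    }

bare = "hello world"  # has some value on its own...
envelope = enrich(bare, app_name="my-app", space="dev", org="acme", instance_index=0)
# ...but a lot more once you know it came from my-app in acme/dev, instance 0.
```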
B: So one of the ways to look at what we're doing right now is that we have inputs coming into Fluentd, and outputs. This is our current architecture diagram: things come into Fluentd and we send them to outputs. Right now we only have one output, which is Log Cache, and we only have about two inputs. But I think the big way we want to think about logging going forward is: as Cloud Foundry, we have a feature.
B: The feature is drains. Drains are our contract, and we would like to expose those to the outside world. And inasmuch as we expose drains, we would like to expose interfaces so that anyone within the platform can emit their logs in a way that will work with CF drains.
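As a rough illustration of "drains are our contract": a drain is essentially a URL describing where to egress logs. The sketch below parses one the way an egress component might route on it; the `drain-type` query parameter is modeled on existing CF syslog drains, but the shape here is an assumption, not the cf-for-k8s implementation:

```python
# Illustrative only: split a CF-style drain URL into routing fields.
from urllib.parse import urlparse, parse_qs

def parse_drain(url):
    """Break a drain URL into the pieces an egress component would route on."""
    parsed = urlparse(url)
    params = parse_qs(parsed.query)
    return {
        "scheme": parsed.scheme,   # e.g. syslog, syslog-tls, https
        "host": parsed.hostname,
        "port": parsed.port,
        # CF syslog drains take a drain-type parameter (logs, metrics, or all)
        "drain_type": params.get("drain-type", ["logs"])[0],
    }

drain = parse_drain("syslog-tls://logs.example.com:6514?drain-type=all")
```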
B: We've been working on getting logs in that are annotated with the correct data, and then at some point we'll work to make sure those get sent to various different destinations besides Log Cache. But first, we've been working on the fluent-formatted ingress, to try and address the important use case of injecting logs into the log stream.
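The pipeline described above (a couple of inputs feeding Fluentd, with Log Cache as the one output today) might look roughly like the following configuration sketch. The plugin choices, tags, ports, and Log Cache endpoint are all illustrative assumptions, not the actual cf-for-k8s config:

```
# Sketch only: not the real cf-for-k8s configuration.

# Input 1: logs forwarded from node-level agents.
<source>
  @type forward
  port 24224
</source>

# Input 2: an HTTP ingress for injected logs.
<source>
  @type http
  port 9880
</source>

# Output: everything currently goes to Log Cache (endpoint is hypothetical).
<match cf.**>
  @type http
  endpoint http://log-cache.cf-system.svc:8080/api/v1/ingress
</match>
```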
B: And so there are some things we're hoping to work towards in the future. There are some vague thoughts about platform logs: in the past, platform logs and application logs were served by entirely different architectures. We have syslog-release, which has been managed by a syslog team, and at some points managed by the logging team itself. And given that we're now on the same architecture, on the same level playing field, as applications, it might be useful to consider how we can consolidate these two to be more consistent and more usable for everyone. But this is something that we haven't had a lot of time to work on, let alone put specific thought into.
B: We would like to at some point, though. We could possibly consider more input methods for how we expose things to components and services, especially as people start to do more Kubernetes-y things, like producing Kubernetes events, if that ends up being the thing. And then more outputs: syslog is somewhat used, but it's not universally used. It would be nice to have a small set of outputs that could serve as many people as we can. And with that, I'll ask if there are any questions.
E: Thank you. Can I ask a question, or was there already one?
E: Thanks. So I've been thinking about this logging that we do with Eirini, right, in the picture and on Kubernetes, and I still want to ask again, just to make sure that I'm not missing anything obvious. The logging problem and the metrics problem are something that Kubernetes itself shares.
E: So presumably we want to overlap as much as we can with what Kubernetes is doing under us, and try to minimize the amount of new mechanisms we bring in, right? So I'm wondering: have you thought about that, and do you think it's absolutely necessary that we bring in something like Fluentd all the time, given that in some cases, for example on GKE, there's already a Fluentd there?
B: That is something I think we have considered. One of the big things is that even though there is a Fluentd that comes with some clusters, it's in a lot of ways not configured to work with, or to consider, the architecture of CF. One of the big use cases is Log Cache, which does need to get application logs into it in order to serve cf logs.
B: I think there's a very CF-centric lens that people want to see their logs in that, I think, is often not served by a generic Kubernetes pattern.
E: All right. So the difference that I see with cf logs versus kubectl logs is that kubectl just gives me, you know, standard error and standard out, and cf logs aggregates some stuff, right? It aggregates Diego cell and Cloud Controller output, everything that's pertinent to my app. So you're saying that is, is this the feature that's missing from the Kubernetes side?
B: I wouldn't say it's necessarily missing from the Kubernetes layer. As far as my understanding goes, while there is a very generic Kubernetes way to look at logs, and I think that's valuable in a lot of senses, people often are using CF because they don't want to use just Kubernetes. Kubernetes is a great tool for us to leverage going forward to empower CF, but I think there are a lot of difficulties in just looking at it as "oh, this is just Kubernetes", because of those previous reasons. I think, while we're moving... I'm sorry, just give me a sec.
B: I think that making sure we expose logs via an egress pattern matters, and while that to some degree exists for GKE, on various other Kubernetes architectures people often aren't as aware of where those logs are going, or aren't as in control of where they go, at least to my understanding. And that's something that we are very aware, for at least some subset of people using CF, absolutely needs to be there. But I think as well, the ability, when you're deploying your own Fluentd or your own logging egress pattern, to very much put that lens of CF in front of the log data provides a lot of value that you'd miss by just looking at logs from standard out and saying: this is what logs are, logs from standard out on Kubernetes.
E: Yeah, sorry, I agree with that 100%. I'm not sure if I explained my worry clearly enough. I'm just thinking, you know, there's a way to stream logs in Kubernetes. It doesn't feel like we're taking advantage of that at all; instead we're building a system that's parallel to it. And, you know, I assume that if someone is going to deploy Cloud Foundry on Kubernetes, it might not be the only workload that they have, and they will have tooling to consume logs and stuff from other things that are running on Kubernetes, and their apps are also going to be on Kubernetes. So I'm thinking: is there a way to offer all of these capabilities?
A: For sure. No, to provide others with some context here, I just want to interject that Eirini-X implements a logging interface for KubeCF, which is different, right? It was designed to leverage the Kubernetes logging infrastructure as thoroughly as possible and do as little as possible itself. Can you explain how that is different and how you think that might be used, Vlad?
E: So for that Eirini-X component, we just left the older logging system in place and we removed the logging agent. Well, "removed": there are no more Diego cells, right? So instead of having that logging agent on the Diego cells, we wrote an Eirini-X component that acted as a bridge, acting as a logging agent that sucked in logs from the Kubernetes API and just streamed them over to Doppler.
E: Yeah, so in that setup we don't lose anything; everything is preserved. But it does have drawbacks, right? It does need the entire Loggregator system.
E: That's, you know... I mean, I agree with the presentation that we probably want to move away from that system in the Kubernetes world, but yeah.
B: So, given that you're streaming directly from the API, it doesn't have to be deployed on the same cluster the workloads are on, is what we're saying, right?
E: Actually it doesn't, that's true. We haven't tested anything like that, but I guess that would be true. It is in-cluster, though: you deploy that component next to everything, and it just watches to see when new Eirini apps get deployed and then streams logs whenever they're needed.
B: I think in a lot of ways we think of what we've built on cf-for-k8s as similar to that bridge, or similar to the concept of that bridge. While we haven't used Kubernetes tailing or any of those sorts of things, we're specifically doing the more direct method. It is something we've considered, but we've stayed closer to patterns that we, as the development team, have experience using in the past, and that's the reason for the current patterns as they exist right now. I think if we had more pressure to deploy outside the cluster, we might be looking at different methods.
C: I was thinking about things whilst also listening to the conversation, but I guess that seemed like the logical way of maybe addressing what Vlad was saying: being able to pipe stuff both through Kube's native logging infrastructure and then having the CF stuff on top, so that you could get it out in both ways if you wanted to and were willing to take that performance hit.
B: I think they might be in both places already, at least in terms of how we're getting things from annotated pods; those should just always be going to standard out, and they still go to standard out. The question is probably injected logs, which might not be on standard out, but that's something we might be able to consider changing.
F: Cool, thanks. You know, sort of the pattern we've seen, also on the metric side, is that for both logs and metrics we've started from this place of trying to match the existing app developer API, so that we can support the cf CLI and the various clients that app developers might have, cf logs being kind of the popular one, but there are other ways to even look at logs and metrics using CF. And that has kind of opened up all these other adjacent opportunities where we can continue: you know, we have Fluentd running, and we can provide more features from it. We're kind of running into similar things on the metric side, where matching those APIs doesn't exclude you from doing any of the Kubernetes-native things. Those are all still available to you, but we now have components that can provide some operator features as well.
A: Any other questions? This has been really, really interesting for me, but I don't know what further questions to ask. I suppose we'll make these slides available for people, with the notes, as Ben mentioned. I'm not going to try and summarize this in the meeting notes; I'll just sort of point a reference to the material.
A: We have time now, since we only had one presentation for this session. I would advise people to check the cf-for-k8s Slack poll for the next session. I think we have two items for consideration, which I don't have in front of me, but I think the coolest one is Yuri's idea that we should actually take a chunk of time in the next cf-for-k8s SIG, or Kubernetes SIG, and do a brainstorm on topics for a few of these meetings out.
A: So that will, I think, happen next meeting. So bring your ideas, and bring your friends: bring people who you think might have an interest in steering this conversation or adding to it, and we'll get some ideas down for subsequent meetings so that we can talk about the future.
D: I think they put out a proposal recently, and I thought that might be interesting for them to present, or just for folks to discuss.
A: And I think there can definitely be room for that as well. In an hour we could probably do both things: brainstorm ideas and hear an overview of that proposal, which definitely needs to be talked about. It was great that, from what I've seen, it's gotten a lot of feedback on the document, but it looks complex; I haven't got my head around it yet.
A: Are there any other topics of interest or urgency to our community that we want to bring out while we're here? I just want to open the floor.
A: Okay, well, it's been swell. Thank you so much, Ben, for that presentation. Thanks for the questions and comments. Let's go see if we can tune into some KubeCon sessions today in the midst of our other stuff. So thanks very much, everyone, for coming.