From YouTube: CF for K8s [March 2020]
A: Oh yeah, right, sorry for that. I'm still figuring out my home office setup.
C: But really, the important thing for both operators and component teams in CF-for-K8s is that we do not plan to bring the Loggregator system over for metric transport. We are working with the CF-for-K8s teams to help them instrument their workloads in Prometheus, and we want to really enable teams to understand how to do that, what that means, and what the implications are. So I think this is a good forum to start that discussion.
B: And I remember you had mentioned to me that some of the existing components, like CAPI and UAA, might be fairly easily positioned to move to that convention for metrics, because they've already been using things like StatsD, and, at least in the short term, there are some easy translators that would adapt that protocol to these Prometheus patterns.
C: Yeah, those are good examples, UAA and StatsD. UAA and CAPI both use the StatsD format, and the term in the Prometheus project for converting metrics is called an exporter, for those that don't know. So there's a popular statsd_exporter, which will live in your pod, listen on the UDP port for StatsD, and then convert those StatsD metrics into the Prometheus format.
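The StatsD-to-Prometheus translation described here can be sketched minimally. This is illustrative only (the real statsd_exporter is a Go daemon with configurable mapping rules, labels, and histogram support), and the metric names are made up:

```python
def statsd_to_prometheus(line: str) -> str:
    """Convert one StatsD metric line (e.g. 'cc.requests:1|c') into a line
    of Prometheus text exposition format. Illustrative only: the real
    statsd_exporter supports mapping rules, labels, and more metric types."""
    name_part, rest = line.split(":", 1)
    value, metric_type = rest.split("|", 1)
    # Prometheus metric names use underscores, not dots.
    prom_name = name_part.replace(".", "_")
    if metric_type == "c":  # StatsD counter -> Prometheus counter
        prom_name += "_total"
    return f"{prom_name} {value}"

print(statsd_to_prometheus("cc.requests:1|c"))    # cc_requests_total 1
print(statsd_to_prometheus("uaa.latency:250|g"))  # uaa_latency 250
```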
C: For me, there is a big mental shift here for consuming metrics, which is that we are transitioning from a push model to a pull model. So the development of something like a nozzle is definitely different, and, like I said, we've started to see that challenge in CF deployments. It's pretty hard to get people to grok switching from push to pull in a CF deployment.
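The push-to-pull shift can be made concrete with a small sketch: in the pull model, a component only keeps its counters in memory and renders them on demand for whoever scrapes it, rather than actively emitting envelopes to a transport. The names here are illustrative:

```python
# Pull model: the component keeps counters in memory and renders them on
# demand; the scraper (Prometheus) decides when to collect.
class Metrics:
    def __init__(self):
        self.counters = {}

    def inc(self, name, amount=1):
        self.counters[name] = self.counters.get(name, 0) + amount

    def render(self) -> str:
        """Render current state in Prometheus text exposition format.
        This is what a GET /metrics handler would return."""
        return "\n".join(f"{n} {v}" for n, v in sorted(self.counters.items()))

m = Metrics()
m.inc("http_requests_total")
m.inc("http_requests_total")
print(m.render())  # http_requests_total 2
```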
C: It's a little bit easier in Kubernetes, because all of the endpoints are defined in pods, which live in the Kubernetes API. So service discovery is a little bit easier leveraging the Kubernetes API, but it still does put the onus on the pull mechanism to retrieve the metrics. To help with that, we are looking at including a Prometheus server in CF-for-K8s. It is popular for vendors to extend on top of the Prometheus server.
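Service discovery via the Kubernetes API is often driven by pod annotations. A hedged sketch of that pattern follows; the `prometheus.io/*` annotations are a common community convention, and real Prometheus does this with its `kubernetes_sd_configs` and relabeling rules rather than code like this:

```python
def scrape_targets(pods):
    """Given pod data as plain dicts (shaped like Kubernetes API objects),
    pick out pods that opted into scraping via the common prometheus.io
    annotations, and build their scrape URLs."""
    targets = []
    for pod in pods:
        ann = pod.get("metadata", {}).get("annotations", {})
        if ann.get("prometheus.io/scrape") == "true":
            ip = pod["status"]["podIP"]
            port = ann.get("prometheus.io/port", "9090")
            path = ann.get("prometheus.io/path", "/metrics")
            targets.append(f"http://{ip}:{port}{path}")
    return targets

pods = [
    {"metadata": {"annotations": {"prometheus.io/scrape": "true",
                                  "prometheus.io/port": "9102"}},
     "status": {"podIP": "10.0.0.5"}},
    {"metadata": {"annotations": {}}, "status": {"podIP": "10.0.0.6"}},
]
print(scrape_targets(pods))  # ['http://10.0.0.5:9102/metrics']
```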
C: Candidly, a lot of our knowledge is pretty theoretical. You know, we've been reading best practices; we know sort of how to do this. We're working with CAPI to kind of get a first team through the process of what it looks like to do what should be an easy exposition; like I said, StatsD has a popular exporter.
C: We know CAPI has metrics that operators are interested in. Additionally, it's not uncommon; there are probably around 150 exporters, including exporters for MySQL and Postgres, so in the case of CAPI we may also help to export metrics from the backing database as well. But it's something that our team really wants to work on as an enablement team.
C: They shouldn't have to; their workload should be the same between BOSH and K8s. Their workload will continue to instrument in StatsD, I'm expecting, unless they decide to make a change, and, you know, I think that will serve them. The scope of work for them is to include a little bit of extra YAML to say: hey, we need another container running in our pod, which is this statsd_exporter.
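That "little bit of extra YAML" is essentially one more container appended to the pod spec. A sketch of the shape of that change, written in Python; the image tag and port numbers are illustrative defaults, not something prescribed in this discussion:

```python
# Sidecar pattern: add a statsd_exporter container alongside the app
# container, so StatsD UDP traffic sent to localhost is re-exposed as
# Prometheus metrics. Image/ports are illustrative defaults.
STATSD_EXPORTER_SIDECAR = {
    "name": "statsd-exporter",
    "image": "prom/statsd-exporter:v0.15.0",
    "ports": [
        {"containerPort": 9125, "protocol": "UDP"},  # StatsD in
        {"containerPort": 9102, "protocol": "TCP"},  # Prometheus out
    ],
}

def add_sidecar(pod_spec: dict) -> dict:
    """Return a copy of the pod spec with the exporter sidecar appended."""
    spec = dict(pod_spec)
    spec["containers"] = list(pod_spec["containers"]) + [STATSD_EXPORTER_SIDECAR]
    return spec

pod = {"containers": [{"name": "uaa", "image": "cloudfoundry/uaa"}]}
names = [c["name"] for c in add_sidecar(pod)["containers"]]
print(names)  # ['uaa', 'statsd-exporter']
```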
C: They may have a little bit of versioning and stuff to make sure that they stay up to date with that exporter from the Prometheus project. And in that case in particular, I will say, exporters get to be a little bit more community driven. So we may find at times that if you have, for example, something like a Postgres exporter or MySQL exporter, it may not come from the Prometheus project and may come from a more obscure source.
C: So there may be times when the exposition itself does require teams to think about owning a little bit of code there. But again, that's where we can help. We do want to think of it more as enablement than we have in the past of just saying: hey, give us these gRPC envelopes and we'll take it from there.
C: It is a little bit more on the team to think about what format of metrics they are using and how they convert that to Prometheus, or whether they should consider instrumenting natively in Prometheus. Like I said, we'll be there to help with that enablement, but it's a pretty rich ecosystem. So if you're already instrumented in some way, chances are there's tooling to help you get to Prometheus.
E: On a much smaller scale, I think we've recently done a similar thing for BOSH itself. Our internal BOSH team has been trying to get as many metrics as we can out of BOSH, putting a Telegraf exporter on the side and just trying to funnel it all through a single thing. I think they haven't switched to pull-based metrics yet; we are still using the push model, but this is certainly an option once you have this generic stack up and running.
C: Yeah, I think we've seen similar usage of Telegraf on Pivotal Container Service; we use Telegraf to convert pull to push. Like I said, for a lot of partners, ISV partners, and really a lot of sort of enterprise network setups, it is not necessarily easy to implement a pull from outside the subnet. So having push is still a really valuable integration point, and Telegraf is a great open source agent and tool to take the pull metrics and convert them to a push.
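The pull-to-push conversion Telegraf performs can be sketched as parsing a scraped Prometheus exposition payload and re-emitting it in a push-friendly format (InfluxDB-style line protocol here, one of the outputs Telegraf supports). Purely illustrative:

```python
def pull_to_push(exposition: str, host: str) -> list:
    """Convert scraped Prometheus text exposition into push-style records
    (InfluxDB-like line protocol). Illustrative only: real Telegraf uses
    its prometheus input plugin and one of many output plugins."""
    records = []
    for raw in exposition.splitlines():
        raw = raw.strip()
        if not raw or raw.startswith("#"):  # skip HELP/TYPE comment lines
            continue
        name, value = raw.rsplit(" ", 1)
        records.append(f"{name},host={host} value={value}")
    return records

scraped = "# TYPE http_requests_total counter\nhttp_requests_total 42"
print(pull_to_push(scraped, "api-0"))
# ['http_requests_total,host=api-0 value=42']
```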
E: Regarding the document you've shared with the community, I haven't seen too much interaction with it so far; I don't see comments on the document or discussions happening. Is this due to the fact that within VMware everything is perfectly aligned? Is it due to people not caring about observability and struggling with much more fundamental issues? What's your feeling on why there's so little feedback coming in, or did you receive feedback on other channels?
C: Yeah. You know, we're helping them anticipate a need they haven't actually encountered yet, so I feel like it's a little bit of a concern that teams are kind of used to having this taken care of for them. I'm hoping to get some more interaction but, like you said, I think for a lot of teams it's not top of mind, so we're starting to set up some calls and, you know, spend some time with them. I mean, the kpack team is a good example; they haven't really thought about metrics at all.
C: Yeah, KubeCF is maybe in a little bit of a middle ground, where you all have Loggregator included. Like I said, Loggregator has over time included exporters, so that metrics that are instrumented through a Loggregator envelope can also be exposed through a Prometheus endpoint. I haven't thought too much about what it would take to get that route going, but it should be a possible transition there as well.
E: I think this whole metrics topic also came up in the context of how the metrics API for consumers would change, how individual metrics would change, and so on. There might be several projects, for example the application autoscaler project, currently relying on certain metrics being provided with certain meanings.
C: For our CF-for-K8s work and sort of the CLI experience, we are matching the existing CAPI stats endpoint. That provides what are usually referred to as the container metrics in CF, and it uses the Log Cache API formats. We have a new, K8s-only component called the metric proxy. The metric proxy takes requests from the stats endpoint, essentially; it then goes and scrapes the kubelet for those metrics, converts them to the Log Cache format, and kind of acts as a translation layer.
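A rough sketch of that translation layer: mapping kubelet-style usage numbers into a Log-Cache-style gauge envelope. The field names are simplified stand-ins for the real kubelet Summary API and Loggregator v2 envelope schemas, not the actual wire formats:

```python
def kubelet_to_log_cache(pod_stats: dict, source_id: str) -> dict:
    """Translate kubelet-style container usage numbers into a
    Log-Cache-style gauge envelope, the way the metric proxy acts as a
    translation layer. Schemas are simplified stand-ins."""
    return {
        "source_id": source_id,
        "gauge": {
            "metrics": {
                "cpu": {"unit": "nanocores", "value": pod_stats["cpu_nanocores"]},
                "memory": {"unit": "bytes", "value": pod_stats["memory_bytes"]},
            }
        },
    }

env = kubelet_to_log_cache(
    {"cpu_nanocores": 120000, "memory_bytes": 256 << 20},
    source_id="app-guid-1234",
)
print(env["gauge"]["metrics"]["memory"]["value"])  # 268435456
```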
C: So the container metrics themselves are going to continue to match the existing stats API, and that probably covers some cases for autoscaler, but also probably not all. I know autoscaler is sort of a piece of functionality where each vendor does it a little bit differently, and that will maybe continue to be the case.
C: Kubernetes has some autoscaling tools and things that we can leverage. I saw that note in the agenda, so I've been researching that a little bit just in preparation for this call, but I haven't thought through too much exactly how we want to bring that to the community. Like I said, it probably gets into some vendor-specific implementations around how we do that, but the container metrics are something we're going to keep the existing APIs for and match them, as the Log Cache API.
D: And would that Log Cache API be the expected endpoint for some autoscaler, be it an open source or a vendor-specific one, to get the metrics from? Or would it be expected that you go directly to the Kubernetes API? Or would it be expected that you maybe rely on a Prometheus being there, available in basically every distribution, and you could also go to that Prometheus?
C: That's a good question. I think that a basic autoscaler could use that existing Log Cache API; really, the stats endpoint, I think, is the endpoint that a community autoscaler would be likely to use, and because we match the API, I don't see any reason why that couldn't be brought over as sort of a lift-and-shift component.
C: I think if vendors want to do an implementation that provides autoscaling on metrics outside of container metrics, say component metrics or custom app metrics, we probably will take the more Kubernetes-native route and, you know, extend the horizontal pod autoscaler, leaning on the existence of either a Prometheus server or a component called the metrics server in the Kubernetes ecosystem. Like I said, I haven't thought through too much of what that might look like, but that would be kind of the approach I would expect.
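For reference, the core scaling rule of the Kubernetes horizontal pod autoscaler is a simple proportion. A minimal sketch; the real HPA adds tolerances, stabilization windows, and min/max replica bounds on top of this:

```python
import math

def desired_replicas(current_replicas: int, current_value: float,
                     target_value: float) -> int:
    """Core HPA scaling rule: scale proportionally to how far the
    observed metric is from its target (tolerances and bounds of the
    real HPA are omitted)."""
    return math.ceil(current_replicas * current_value / target_value)

# 3 replicas averaging 180 requests/s each against a 100 req/s target:
print(desired_replicas(3, 180, 100))  # 6
```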
C: As for CF-for-K8s, the Metric Store does interop with PromQL, so our feeling is that a Prometheus server gives sort of the equivalent functionality for dashboarding and alerting, and has a little bit of additional functionality around federation. So that's kind of our approach in the open source. We are leveraging some of that work on the Metric Store for some of our commercial products, but in the open source, we think that the CNCF project Prometheus is probably the expected open source route to take.
C: We're in the early stages of getting the component metrics into the Prometheus server, and, you know, that doc is put out there to really start that enablement with each of the teams. We are near complete and integrated on that metric proxy component, so that the existing stats endpoint replicates those container metrics, and they are accurate to what comes from the kubelet.
A: That essentially means we have a situation that we didn't have before, which is that the meeting is not coming to an end, but we could still, in fact, look at the voting and pick a second topic. As it looks now, we have "feature gaps between CF-for-BOSH and CF-for-K8s 1.0" heavily voted for by the former BOSH folks from SAP, I guess.
E: I guess the timeline question is more something for the VMware folks. I just heard a 1.0 being referenced in the previous meetings at some point, so I guess whatever we want to declare a 1.0 at whatever point in time, the question is what the gap would be; but the timeline is also a very fair question concerning the same thing.
B: Definitely, we have some leeway in deciding what we want a 1.0 to mean for cf-for-k8s. I don't think that'll be full feature parity with BOSH, and that, honestly, will be distinct from whatever commercial distribution VMware is providing on top of that. Those aren't necessarily coupled together, because that's a commercial distribution versus the open source project. So far we've just been getting cf-for-k8s to come together and getting the basic experience working.
F: This will bring up the fact that we have to reconsider certification, right? Because we're going to have a couple of ways to deliver this, at least, you know, up to the end of this year, with KubeCF and cf-for-k8s. Is one going to be a certified distribution, or are both going to be certified distributions? And what will be the criteria for feature completeness? The whole purpose behind CF certification is that, you know, if you push things to one CF, it's going to behave the same way as if you push to another; there's the vendor independence. Do we have to start looking at how we define it? I mean, I know I brought this up before, but should we look at test-based criteria for certification, as opposed to code-based criteria for certification?
F: That's a lot of work; I totally recognize that, and I think that's why we haven't done it so far. Yeah, I think that's why certification has been "ship this code", because we haven't got enough coverage to really define the certification as explicitly as just shipping, you know what I mean. But I think we will have to do that, Chris, at some point: start building the certification around acceptance testing of some kind, or automated tests.
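The test-based certification idea can be sketched as a harness that runs behavior checks against any platform implementation, regardless of what code it ships. This is a hypothetical harness shape for illustration, not the real CATS suite:

```python
def run_conformance(platform, checks) -> dict:
    """Run behavior-based checks against any platform implementation and
    report which pass, independent of what code the platform ships.
    Hypothetical harness, illustrative names only."""
    return {name: bool(check(platform)) for name, check in checks.items()}

# A fake platform standing in for "any certified CF distribution":
class FakeCF:
    def push(self, app): return "running"
    def logs(self, app): return ["hello"]

checks = {
    "push_starts_app": lambda p: p.push("dora") == "running",
    "logs_are_streamed": lambda p: len(p.logs("dora")) > 0,
}
print(run_conformance(FakeCF(), checks))
# {'push_starts_app': True, 'logs_are_streamed': True}
```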
F: But we have to be a little more careful, I think, and this is speaking secondhand, because I don't deal with CATS a lot, so I'm glad you could jump in. We felt that there isn't quite enough coverage there too, and they're a little bit loose, and we've added additional tests for our own verification of KubeCF. I think it's time to put them all into one place where we can agree on what the desired behavior is.
E: The idea was: let's build a test suite, put behavior-based tests in there, and define what "OpenStack certified" means. To cut a long story short, in the end almost nothing was in there, because people could not agree on what it means to be an OpenStack distribution, and the only tests in there were ridiculous, covering only very little of the API, like: hey, create a VM, delete the VM, maybe attach a disk.
F: We're much more reasonable people; I think we just kind of trust each other not to be idiots about it. I'm pretty sure everyone I've talked to, you guys have a pretty consistent idea, so maybe I'm just being optimistic, but I think we won't have the same problem, because we at least do have CATS as a starting place.
B: So, I mean, Kubernetes has done something similar in terms of having conformance tests, and that seems like it's been fairly broadly useful across the breadth of Kubernetes distributions. Marco, do you have a sense, can you give any context on that, or how it compares to the kind of failed effort on the OpenStack side?
B: On the flip side, I think some of the Kubernetes capabilities that VMware has been baking into vSphere 7, as part of what they're calling Project Pacific, have failed some of those conformance tests, I think relating to volume attachments or something. So there's got to be some meat there, if there's something to fail.
C: I think the Project Pacific ones are sort of known failures, because it doesn't use the kubelet; it uses a different component. But I think it's generally been useful; I know on Pivotal Container Service we've found it useful, although admittedly we kind of do it at the last minute, like "are you passing the conformance tests?" right before we release. But it is also kind of just to get the stamp, to say we're conformant. I've never heard of specific functionality we missed or caught through the conformance tests.
F: Same with SUSE CaaS Platform; it's something we hope to go through to get the certification. It never seems to catch anything, catch us out, but that might mean we're just lucky, or it might mean that it doesn't cover quite enough. I'd say I have very few distribution-related problems going from one Kubernetes to the other, because we test on several different Kubernetes platforms. So maybe it is doing its job.
H: On the docs part of what Marco asked: how would we document this? Will this be owned by CF-for-K8s, or will each team have it, in terms of the differences? We talked about this within the team, and we were thinking of using either known issues or some documentation to highlight the changes, the differences, actually, between the BOSH world and the K8s world.
A: So, any last-minute questions on the current topic, before I also need to drop off? I guess we can keep it a little bit shorter today, but I think we've managed to cover two topics. So I guess this is a call to this round to submit new topics to actually talk about. Feel free to add your own choices to the poll and see them get voted on, hopefully. So, with that, if there are no last-minute topics...