From YouTube: 2021-05-05 meeting
B
Let me share the notes.
B
Okay, so I think let's get started; we're about three minutes past. Koi and Iris, do you want to do a quick walkthrough of the StatefulSet support that we've added to the OTel operator? Let's walk through that. Do you want to share your screen?
C
Yes. Iris, do you want to go first? I'm still setting up my cluster.
D
Okay, I can start first.
D
Okay, so I will share our tests for the StatefulSet first.
D
Okay, so I'm going to talk about the kuttl tests. kuttl is the end-to-end test framework we are going to use to test the functionality of our operator in creating the StatefulSet. kuttl is the KUbernetes Test TooL, a toolkit for writing tests for Kubernetes operators. The first reason we use kuttl is that it provides us with an end-to-end test environment.
D
We test in kind Kubernetes clusters, which helps us ensure that the entire process of creating the StatefulSet collectors from the custom resource is correct. Because we are able to use a kind cluster, the tests are isolated from real clusters; we don't have to create real clusters for the testing. Another reason is that kuttl is easy to use: all the test files are in YAML, so we don't need to write extra Go code for testing, and the YAML files are very declarative themselves.
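As an illustration of how such a suite is wired up (the paths here are hypothetical, not taken from the actual operator repository), a kuttl run is typically driven by a TestSuite manifest that starts the kind cluster and points at the test directories:

```yaml
# kuttl-test.yaml: hypothetical suite configuration for illustration
apiVersion: kuttl.dev/v1beta1
kind: TestSuite
startKIND: true        # spin up an isolated kind cluster for the run
testDirs:
  - ./tests/e2e        # each subdirectory under here is one test case
timeout: 120           # seconds to wait for each assertion to hold
```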
D
For each of the test cases, the install YAML file is executed first, and then the framework gets the Kubernetes objects from the cluster and verifies that they meet the expectations we specified in the assertion file. So let's take a look at the install YAML file. This is the standard YAML file we wrote using the operator custom resource definition.
D
We use it to start a custom resource instance. In our case, we can see that we set the mode to statefulset, we set the replicas to 3, and we set the volumeMounts and volumeClaimTemplates, which are specific to the StatefulSet.
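The custom resource just described might look roughly like the following sketch (the field names follow the OpenTelemetryCollector CRD, but the volume and claim names are hypothetical placeholders, and the collector config section is omitted for brevity):

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: statefulset            # pods come out as statefulset-collector-0..2
spec:
  mode: statefulset            # the new mode that renders a StatefulSet
  replicas: 3
  volumeMounts:
    - name: testvolume         # hypothetical names for illustration
      mountPath: /usr/share/testvolume
  volumeClaimTemplates:
    - metadata:
        name: testvolume
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```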
The next step would be the test assertion file. This defines the final state we expect after the custom resource instance has started, so we assert the fields that we care about and delete the others. We can see here that the name is statefulset plus the -collector suffix, and it also includes all the other fields we care about, like the replicas, the volumeClaimTemplates, and the volumeMounts. So that's the basic structure of our tests for the StatefulSet.
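The assertion file for this case could be sketched as below; kuttl compares only the fields that are present in the manifest (the file name and values are hypothetical examples, not the actual test file):

```yaml
# 00-assert.yaml: hypothetical assertion file for illustration
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-collector   # the CR name plus the -collector suffix
spec:
  replicas: 3                   # only the asserted fields are compared
status:
  readyReplicas: 3
```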
D
So this is the test I just ran before the presentation. We can see I simply entered make e2e to run the tests, and it passes all four test cases, the StatefulSet features and the smoke StatefulSet tests, as I just mentioned. And here we can see the log for the test; it shows every step of how it creates the collectors.
D
The pods: here we can see that we have three pods for this statefulset-collector, just as we stated in the install YAML file, there are going to be three replicas. So that's basically my part for kuttl.
B
Okay, thanks, Iris. Are you ready?
C
Yeah, so Iris explained the testing that we added for the StatefulSet support now that we have enhanced the operator, and I can show what it would look like if you deployed the operator in your local cluster and wanted to create a StatefulSet resource there.
C
So I'll share my screen real quick. Iris, can you let me share my screen?
C
It says I can't share while you're sharing.
C
Okay, so here I have a minikube cluster, and in it I have the cert-manager that Iris showed as well as the OpenTelemetry operator. I can show that by running kubectl get all: you see I have the OpenTelemetry operator and cert-manager deployed in my minikube cluster. So this is the basic setup of having the operator installed into a new cluster.
C
And
now
I,
if
I
wanted
to
create
a
safety
set
resource,
I
would
apply
use
a
coupe
cuddle,
apply
command
using
a
gamma
template,
so
it's
very
similar
to
the
yellow
template
that
iris
provided.
As
you
can
see
here,
I
have
a
open
challenge
collector
and
its
name
is
stateful
and
the
new
mode
that
we've
added
is
a
staple
set
mode
which
creates
a
stable
set
resource,
and
I
provide
its
own
volume
mount
as
well
as
this
new
added
field
that
we
add
to
the
crd
or
the
custom
resource
definition.
C
As you can see, it says stateful-collector-0 is now running. Note the zero: as Iris covered in her tests, that is due to the ordinal numbering in StatefulSets. And if I do a kubectl describe statefulset, then you can see here that it was created with the volume claim that I specified in the YAML (testvolume, standard storage class).
C
It is one gigabyte and also has the specified volumeMounts. So this is an example of how you could create your own StatefulSet using the operator, what we've added to the YAML to be able to do so, and what it would look like if you were to run it in a local cluster with that use case. So that is pretty much it for the StatefulSet enhancement to the operator, and we welcome any questions right now.
B
Thanks, Wei. David, I'm assuming you had already done an initial review, but again, as we make progress, you know, we'll be filing these PRs on the operator, but also then starting to work on the next step.
G
I have a question; this looks cool. Do we have any default settings for the volume? Do we expect people to set everything, or are we going to have some defaults for some of the settings?
C
Yeah, so we actually do have a default. If the user does not specify a volumeMount or volumeClaimTemplate, then it's defaulted to one that we have within the code, which just uses the default storage class, pretty much just a normal default if they don't specify one. But if they do specify one, then we actually replace that default and just use theirs.
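As a sketch of what that fallback could look like (the name and size here are illustrative assumptions, not the operator's actual default values):

```yaml
# Hypothetical default claim template used when the user specifies none.
volumeClaimTemplates:
  - metadata:
      name: default-volume      # illustrative name
    spec:
      accessModes: ["ReadWriteOnce"]
      # storageClassName is omitted so the cluster's default class is used
      resources:
        requests:
          storage: 1Gi          # illustrative size
```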
E
Is
there
a
way
for
them
to
specify
that
they
don't
want
a
volume?
Then
not
right
now
is
there's
the
stateful
set.
You
use
a
deployment,
I
would
guess.
Well
yeah
I
mean
I
could
see
like
if
the
stateful
set
was
required
for
sharding,
for
example,
then
someone
might
want
to
do
that
without
having
persistent
storage.
But
yes,
we
can
also
cover
this
offline.
If
we
need
to.
B
Okay, if there are no other questions for now, let's move on to the next topic. This is a topic that I added just to, you know, let everybody know that in the metrics discussions, the metrics API discussions, again, there has been back and forth, and maybe Josh can also provide some more detail here, but we have decided to, you know, default to the Prometheus push protocol again, not the pull, which is what it was earlier.
B
So
again,
that's
good
news
for
most
of
the
use
cases
that
are
actually
handling
push,
but,
however,.
H
I
don't
remember
this
decision
in
a
in
any
context
that
I've
been
present,
for.
I
was
wondering
if
you
could
say
more
about
this
decision
or
what
you
were
hoping.
B
Josh
again,
I
was
not
at
the
meeting
but
riley
I
was
just
conveying
what
riley
you
know
conveyed
okay
and.
H
I
think
what
was
said
was
actually
that
we
are
committing
to
the
idea
that
the
sdks
will
support
push
for
otlp.
H
What you wrote is push through the Prometheus remote write path, which runs into this problem (not a problem, but, you know, sort of a challenge) involving the semantics of things like staleness markers and late-arriving data and all that stuff, which I know we intend to discuss; all of us would like to. I just want to make sure we weren't jumping ahead of that topic. No.
H
This
is
good,
and
then
I
think
my
my
the
way
I've
been
thinking
about
this
is
that
we
have
sort
of
two
stages
of
of
explaining
this
to
the
world
right.
So
in
the
beginning,
people
will
understand
how
to
pull
data
from
prometheus.
H
It's
a
very
familiar
model
and
the
idea
of
pushing
otlp
will
start
with
just
cumulatives,
because
there's
no
translation,
that's
really
needed
in
the
code
path,
but
we
still
have
to
pass
through
a
prometheus
remote,
write
exporter
and
there's
some
question
of
whether
we
can
ever
get
a
semantically
correct
up
metric
or
the
stainless
markers
in
place
when
we
are
pushing
otlp
through
a
collector.
I
believe
that
that
can
be
done,
but
I
just
don't
think
it's
going
to
happen
in
the
next.
You
know
quarter
or
so
just
want
to
make
sure
that
that
was.
H
What
I
mean
by
this
is
that
we
just
have
more
specification
work
that
has
to
be
done
as
well
as,
like
I'd,
say,
a
proof
of
concept
for
the
the
collector
side
of
things
that
that
can
actually
do.
This
sort
of,
I
want
to
say,
semantically,
correct,
join
between
information
about
service
discovery
and
information,
that's
being
pushed
through
using
otlp,
and
I
I
think
that
that's
doable
and
the
reason
why.
H
But
I
don't
think
it
should
happen
as
fast
or
as
soon
as
just
getting
cumulative
to
work
or
just
getting
pulled
to
work
and
the
reason
why
there
are
I've
written
three
spec
prs,
they're,
just
pretty
rough
and
they're
all
new,
so
just
starting
to
get
reviews.
16,
46,
48,
49,
talking
about
some
of
the
sort
of
foundations
that
we
will
need
to
get
to
where
we
can
push
data
to
a
collector
and
have
it
come
out.
H
Looking
like
it
was
pulled
from
prometheus
because
it
will
have
put
in
those
up
metrics
and
the
stainless
markers.
So
if
this
interests,
anyone
the
spec
prs,
are
about
aligning
data
and
re-aggregation
and
staleness
and
stuff,
and
I'd
like
you
to
take
a
look
at
those
to
help
us
move
in
the
direction
of
supporting
push
properly.
Thank
you.
B
That's great, Josh; thanks for the context. Can you just share the PR numbers again? I'll...
G
It makes sense here: push with OTLP to collectors, right?
F
I think, to make sure that I'm understanding this correctly: we expect that, when the collector is initially stable with metrics, we will support pulling from a Prometheus endpoint, running it through a collector pipeline, and pushing it to a Prometheus remote write endpoint.
F
We
expect
that
we
would
be
able
to
pull
from
or
receive
push
data
from
other
sources,
whether
that's
otlp
or
some
other
metrics
and
just
source
run
it
through
the
pipeline,
push
it
to
prometheus
for
remote
right,
but
we
may
not
have
all
of
the
up
and
stay
on
this
data,
and
we
will
we
have.
We
believe
we
have
a
path
to
get
to
that.
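The pipeline just described could be sketched as a collector configuration along these lines (the endpoint and scrape target are placeholders; the component names are the standard collector receiver and exporter names):

```yaml
# Hypothetical collector config: pull from Prometheus or receive OTLP,
# then push through the Prometheus remote write exporter.
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: example                   # placeholder scrape job
          static_configs:
            - targets: ["localhost:8888"]
  otlp:
    protocols:
      grpc:

exporters:
  prometheusremotewrite:
    endpoint: "http://prometheus.example:9090/api/v1/write"   # placeholder

service:
  pipelines:
    metrics:
      receivers: [prometheus, otlp]
      exporters: [prometheusremotewrite]
```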
H
Yes, that's exactly what I wanted to say as well: initially we will be able to put data into Prometheus remote write, but it's not going to be semantically correct by Prometheus's definition if it doesn't have exactly the staleness markers or exactly the up metrics that we should. And I think that might be maybe contentious, that we're in a state where we can produce invalid data according to the definition, but it does give you most of the value.
H
I
think,
and
that
gives
us
a
path
where
people
want
to
see
the
solution
where
the
up
metrics
is
correct.
Eventually,
but
I
think,
as
many
may
have
experienced,
we
can
push
data
into
prometheus,
remote
right
and
see
data.
It's
just
that
it
doesn't
have
all
the
schematics
correct,
and
I
think
that
that's
where
we'll
be
for
a
moment,
if
you
end
up
using
otlp
to
push
and
then
it
turns
into
prw
it'll,
be
not
quite
correct.
Does
that
meet
everyone's
understanding.
H
Actually, these spec PRs are really, as far as I'm concerned, kind of a foundation. Talking about re-aggregation: once you have a way to very clearly define how I match up the data, from the time that I was trying to push the data, with the service discovery state from that moment in time, then I can do the up marker stuff correctly. But it involves time and re-aggregation and temporal alignment. Anyway, those are those PRs.
B
Yep
and,
and
to
that
point
emmanuel,
can
you
give
an
update
on
some
of
the
work
that
you've
been
doing
on
the
up
metric.
J
Yeah,
so
for
the
update
there
were
a
few
conversations.
Some,
I
think,
dashboard
raised
a
great
point,
which
is
that
right
now,
so
my
perception
was
that
and
from
you
know,
from
the
from
the
issues
description
was
that
we
we're
basically
implementing
from
the
receiver
as
a
prometheus
pass-through.
So
we
generate
the
up
metric
and
only
deliver
to
prometheus
exporters,
but
he
raised
the
point
that
we
shall
need.
J
You
know.
Other
exporters
might
need
that
have
metric
like
google
cloud
monitoring
might
need
that
app
metric.
So
the
trade
you
know
it's
a
really
simple
change.
I
made
the
change
locally.
I
just
haven't
pushed
it
up.
I
did
it
last
night,
but
that
actually
simplifies
things
a
whole
lot.
So
now,
just
the
app
metric
will
be
generated
for
everyone
yeah.
J
Yeah, so, Brian, the thing is Prometheus doesn't seem to push out the up metric necessarily. If you try to scrape directly from Prometheus, it doesn't expose it, which was what led me initially to say: hey, you know what, make it a metric that's available only to the exporters, so Prometheus itself can consume it but won't export it out. But given that people want it, we'll give them what they want.
G
Yeah, I didn't quite understand what Josh was talking about; that's why I was confused initially. But we were seeing the collector as the scraper, so I thought that just generating up metrics, for now at least, is the way to go.
H
Everything
you've
all
been
saying
is
what
I
was
imagining
for
for
now
to
produce
up
metrics
is
that
we
would
produce
those
in
the
scraper
and
they
will
mean
what
they
were
supposed
to
mean.
It's
just
that
when
we
have
data
from
an
otl,
sdk
or
sdk,
that's
pushing
into
a
collector,
and
we
want
that
to
come
out
as
prw.
I
Great. Just moving on, Brian, just an important point: it's not a NaN, it's a particular bit pattern for the staleness marker. NaNs are valid values.
H
Okay,
yes,
this
was
discussed
yesterday,
whether
I
think
we
we've
the
the
data
model
meeting.
Yesterday
we
had
an
open
question
about
stillness
markers
to
be
discussed
next
tuesday
morning
at
nine,
and
we
we
basically
need
to
say
what
man's
mean
and
I
think,
there's
a
question
of
whether
I
think
we
want
to
say
what
all
the
nands
mean,
which
means
something
some
sort
of
behavior
and
then
a
specific
man
is
going
to
mean
stay
on
a
smirker
for
prometheus
that'll,
be
discussed
next
tuesday.
I
Yeah,
so
from
from
eta's
standpoint,
stainless
markers
are
in
implementation,
detail
that
are
completely
hidden
from
the
users
and
from
a
user's
standpoint.
There
is
only
one
nam
and
we
always
use
one
value
for
it,
and
just
the
stainless
man
is
different
from
the
normal
man.
So
does
open
open
telemetry's
plan
to
support
from
a
user
standpoint
more
than
one
nan
or
is
it
just?
It's
nan.
H
This
is
what
we
need
to
discuss.
Russia.
H
Yeah
I
was
yeah,
I
think
we
were.
The
question
was
whether
I
triple
e
semantics,
fernand
kind
of
makes
sense
or
not-
I'm
not
for
us
in
this
particular
sort
of
realm
and
anyway,
just
plan
to
discuss
this
next
time.
There's
a
notion
of
whether
we
have
an
implicit
or
an
explicit
stainless
marker,
and
I
think
both
are
kind
of
meaningful.
But
I'd
like
to
discuss
that
next
week,
yeah.
A
Yeah, so these are kind of generic questions. The work I'm doing for EKS Fargate Container Insights is highly dependent on the Prometheus receiver, so I had some concerns. The first thing is: as far as I know, the Prometheus receiver is planning to go GA by the end of May, and our launch date is also around May 27th. Is that still accurate, or have we changed our plan?
B
And
and
we'll
continue
to
keep
working
on,
you
know
the
other
issues
thereafter,.
A
Yeah
so
yesterday,
in
one
of
our
meetings,
so
a
point
came
out
that,
like
new
prometheus
receiver,
like
may
break
some
existing
user
experience,
but
I
am
not
sure
so
so
especially
my
concern
is
like
so
I
am
just
using
the
q
like
escaping
using
kubernetes
service
discovery
for
see
advisor
endpoint.
So
my
understanding
is
like
this
basic
escaping
fizzers
will
be
supported
anyhow
right
so
at
least
yeah.
G
We're
not
gonna
break
any
of
those.
There
is
nothing
that
we
are
gonna
break,
like
the
only
thing
that
may
change
is
some
of
this
remote
right
compliance
stuff,
and
you
know
the
configuration
is
not
going
to
break
or
anything.
F
There
are
some
changes
in
virgin.
Pr
is
in
that
will
change
the
metrics
that
are
produced
by
the
receiver,
but
that's
because
it
was
doing
the
wrong
thing
before
so
we
will
be
getting
closer
to
what
you
would
get
if
you
were
using
prometheus
directly.
A
That's good. And the last thing was that we'll support existing configs. I also feel that we will not break the configurations, right? So the existing configurations for the receiver will work fine.
B
Okay, I think the next item we had on the agenda is Grace with the Prometheus receiver PR.
K
Hey
yeah,
I
just
wanted
to
ask
for
some
feedback
for
the
pr
it's
extending
the
existing
bed
for
prometheus
receiver
per
testing.
B
Thank you. Moving on: Ianna, you had a PR?
G
Other
one
was
about
the
exporter,
you
know
we
removed
the
queue
from
the
exporter
because
of
the
issues
and
we
are
implementing
something
that
looks
like
what
prometheus
does
in
the
exporter.
So
I
I
left
this
comment
because
it's
not
merged
and
currently
there's
no
way
to
fine
tune.
G
You
know
how
many
shards
you
want
in
the
export
and
so
on,
which
might
be
affecting
the
performance,
but
as
far
as
I
can
understand,
the
existing
performance
test
is
only
producing
all
tlp
and
not
using
the
remote
right
exporter.
At
this
point,
we
need
to
merge
this
and
fine-tune
a
bit
before
we
start.
Maybe
you
know
running
some
performance
tests
all.
B
Right, I'll get back... yeah, the process that we are following, which I discussed with Bogdan again, is that if we can get reviews from Anthony, and maybe David can take a look, and anyone else who can actually do a code review, then, as long as we have a couple, I can tag it ready to be merged and Bogdan will merge it. I see some folks here, so you can take a look. So as soon as we get the code reviews, then I can tag it. All right, cool.
cool.
Are
there
any
other
questions
or
updates
that
people
wanted
to
share
again,
brian
or
you
know,
kamal
or
anyone
else
who
is
participating
at
cubecon
any
updates?
G
We want to understand whether there's another process that, you know, we need to go through, or apply for compliance; we don't know much about that.
B
Yeah
yeah,
okay,
thanks
any
other
concerns.
Questions
folks
have
again
one
of
the
things
that
we
are
starting
to
do
and
I'm
working
with
bogdan
on
this
is
just
fyi
that
will
triage
through
all
the
prometheus
bugs
that
are
also
in
the
collector
and
make
sure
that
they
are
reconciled
with
with
you
know
what
we
are
targeting
for
phase
one.
B
All
right
any,
I
guess
you
can,
if
folks
don't
have
any
questions
jay
anything
on
your
end.