From YouTube: John Sanda continues the discussion of a common CRD, in this case how to manage seed providers.
C
So this is a question. It was maybe a few meetings ago that there was some consensus that things were just a little bit stagnant, and someone, I don't recall who, made the suggestion that maybe we ought to focus on the CRD: try to come up with the CRD, define a spec, and use that as the way to move forward, if we can arrive at agreement around the design.
C
So it's taking a top-down approach and then working our way from there into implementation and code, and I think there was a general consensus that that seemed like a reasonable approach. And so then last week I presented an initial, very much work-in-progress doc trying to focus on, well...
C
Actually, prior to last week, we went through a Google Doc where I tried to group sections of the CRDs of the DataStax and Orange operators by logical grouping of the various sections of the CRD, for example Cassandra topology, storage, and Cassandra configuration, and tried to compare and contrast. I was looking for the areas where there is the most in common, I guess, so it wasn't "this operator has X, but this one has Y." It was...
C
From what I had looked at, I'd say there are probably some pretty advanced things going on with what CassKop is doing, and so I'd asked if we could talk about that and go over some of the things that Orange has done around storage with CassKop. And so that brings us up to date.
D
Okay, cool, well, thanks for catching me up on that, and I think that makes a ton of sense. I want to be very, very respectful of the work that everyone's done in the last few weeks, and there's knowledge I've missed since I haven't been on some of these calls. But would it be all right if I asked a few blunt questions to the group?
D
It seems like more or less everyone here has already written an operator, right? And I wonder if it would maybe make sense to solicit a call from all the participants, the people who do have a prime operator IP, to just straight up make a donation of that IP, so we can get something going on the... Oh hey, Scott.
D
Sorry about that. So with a donation, it means we get some runs on the board, right? We get something that's working, we get some code in place, and then we could potentially leverage that CRD design to drive whatever we end up with toward that more optimal design, right, and that optimal operation.
C
It means leaving work behind, and I can absolutely understand and respect that, because everyone's worked really hard on their respective operator projects, and I think that's where things are kind of at a standstill. So I'm not sure how receptive people would be to the idea of basically starting from, I hate to say it, starting from scratch. But it's not just simply saying, well, we're going to take the code base of this operator and use that as a starting point.
D
You know, but I think if everyone feels like they've got a solid path, even if we do a little bit of this upfront design but then accept multiple donations based on that, that might be a good happy in-between. I think some of the donations would be ideal for momentum, but I think the community's requirements are broader.
B
You know, maybe, even if you don't want to admit it, I see that as part of the process too: okay, there might be some things that are better for a community-style operator. But I just feel like, and I would love to hear a rebuttal, that once we get to this place, most of the code that's needed to reflect the CRD is already written.
F
To come back on what Ben said, we mentioned earlier that we were happy to make a donation of the CassKop code. We are still discussing how to do that internally, and it's not something that Orange does often, but we're quite positive it's going to be possible. And our view now is that we are currently finishing the backup and restore functionality, with Instaclustr helping. Once that is done, for us CassKop will be feature complete and would be a prime candidate for donation.
D
Well, I think that sounds good. And apologies, all, I know I'm just getting up to speed, and one of the drawbacks of observing versus doing the detailed stuff is catching up on things. But it sounds like it's kind of a good happy medium.
B
Alright, well, I appreciate that, Ben. It's always good to have a level set, and I appreciate your leadership in this; you've been doing this for a long time, and it's good to just make sure that we're all on the right page here, not the same page, the right page. But good. Well, should we continue with what we were going to work on? I think that's where we're at, so, John, okay.
C
Yeah, I need to share my screen; let me find those questions. I went and watched the video so I could make some notes and then make some changes. While I'm doing that, I'll also mention, if anyone hadn't noticed, that I put a couple of polls on the Slack channel. My apologies, I got tied up with other things and had completely forgotten about them, but it's good to see more activity on Slack.
C
So I went through, and there were a lot of questions and comments related to the doc that I shared in the last meeting. I wasn't able to take notes as the meeting was going, so I went back and watched the video and made some notes. I'll flip over to the document in a moment, but in my mind there are a couple of key questions, which were around... actually, let me, yeah.
C
In the initial design, I had a data center type and a cluster type, and within the cluster type I allowed the data centers to be declared inline as well as by essentially using references, and there were a lot of questions around the implications of how that would work with the data centers defined inline.
C
There were a couple of questions that came up around defaults, what defaults should be used, and I wanted to address a couple of those; they're some good questions. I think Patrick raised a couple of questions around, for example, what the seeds are and how the seeds get configured, and I had put in here the existing property-file approach. But what about configuring seeds?
A
To answer the question for CassKop: CassKop does it by default, but we have MultiCassKop on top of it, and it can override the seeds. MultiCassKop is used for multiple data centers in, I mean, multiple Kubernetes clusters, so they don't know each other; so MultiCassKop just sets the seeds on each one in order for them to be able to communicate.
D
Yeah, so it's the same with the Instaclustr one. We ended up with a patch, and we have kind of a patch waiting in Cassandra for this as well, but we ended up changing the way the simple seed provider works when it comes to resolving IP addresses from a DNS record. We would actually just pass a Kubernetes service record to the SimpleSeedProvider, and then we had a patch to only take a set number of IP addresses from the service record.
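The mechanism just described, resolving a Kubernetes service DNS name and taking only a bounded number of the resolved addresses as seeds, can be sketched roughly as follows. This is a minimal illustration, not the actual SimpleSeedProvider patch; the service name, the seed count, and the injectable resolver are all hypothetical:

```python
# Sketch of seed selection from a DNS service record, as described above.
# Assumptions (hypothetical, not from any actual operator): a headless
# Service whose DNS name resolves to pod IPs, and a cap of max_seeds.
import socket
from typing import Callable, List

def default_resolver(hostname: str) -> List[str]:
    """Resolve a DNS name to its IPv4 A-record addresses (real lookup)."""
    infos = socket.getaddrinfo(hostname, None, socket.AF_INET)
    return [info[4][0] for info in infos]

def select_seeds(service_dns: str, max_seeds: int = 3,
                 resolver: Callable[[str], List[str]] = default_resolver) -> List[str]:
    """Return a stable, bounded seed list from a service DNS record.

    Sorting makes the result deterministic even if the resolver returns
    addresses in varying order, mirroring the 'consistent seeds' goal
    mentioned in the discussion.
    """
    ips = sorted(set(resolver(service_dns)))
    return ips[:max_seeds]
```

For example, `select_seeds("cassandra-seeds.ns.svc.cluster.local", 2, resolver=lambda h: ["10.0.0.9", "10.0.0.2", "10.0.0.5"])` yields `["10.0.0.2", "10.0.0.5"]`.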
D
Kubernetes does guarantee the order of the IP addresses from the service record, I believe, if you configure it correctly, so we kind of get consistent seeds occurring there. But then, as Frank mentioned, we also allow you to override that. It's not an override as such: in the CRD spec itself we have a mechanism where you can provide whatever ConfigMap resources or file resources within Kubernetes, and you can drop YAML fragments in there as overrides.
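Applying a user-supplied fragment over generated configuration, as described here, amounts to a deep merge where the fragment wins. A minimal sketch, with purely illustrative keys and values (this is not the actual CRD mechanism):

```python
# Minimal sketch of merging a user-supplied config fragment over a
# generated base config. The config keys below are illustrative only.
from typing import Any, Dict

def deep_merge(base: Dict[str, Any], override: Dict[str, Any]) -> Dict[str, Any]:
    """Return a new dict where override wins, recursing into nested dicts."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

base_config = {
    "cluster_name": "demo",
    "seed_provider": {"class_name": "SimpleSeedProvider",
                      "parameters": {"seeds": "cassandra-seeds"}},
}
# A fragment overriding just the seed list, leaving everything else intact.
fragment = {"seed_provider": {"parameters": {"seeds": "10.0.0.1,10.0.0.2"}}}
effective = deep_merge(base_config, fragment)
```

Note that only the overridden leaf changes; sibling keys such as `class_name` survive the merge, which is what makes fragment-style overrides convenient.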
D
So that was our primary config override mechanism, and you could certainly override the seed provider that way if you wanted to. But on this whole topic around the inline definitions and the staged discovery, I think we'll quickly keep hitting on the fact that Kubernetes' multi-region story is still ill-defined at best, so I think we want to leave as much flexibility for ourselves as possible, since it's just a little IP string.
A
Yeah, yeah, so we wanted to do something like that. But like you said, Kubernetes doesn't support, I don't remember the name, not replication, but federation, yeah. So as soon as it does, we could just have our own provider that will talk to Kubernetes itself to know exactly which nodes are all part of the same cluster, and just set those seeds automatically. So we...
A
No, I mean, I was talking about the seed provider class, because this is just a way to set seeds without having to change the configuration at all. The node will just know how to get the seed list without you, or anything, having to configure the other Cassandra nodes. That's just it; it would be more dynamic. Oh, okay.
D
You've got to allow it to be queryable before the service marks itself as ready. So it means you then have two different service records: one that is based on when the pod marks itself ready and all the checks pass, and then a seeds-specific service, so that it can use that before everything's ready and do discovery that way, you know.
D
Yeah, I'd agree with that. I mean, it would be nice to have a Kubernetes-API-based one, because it simplifies things from a configuration and design perspective. But then you're throwing in the RBAC stuff, and that's why we ended up going this way with the DNS records, yeah.
C
All right, so I think one takeaway, and I think Ben mentioned this, is flexibility; I think we're at a point where less is more. So I'll switch back over to the doc and move on. Oh, I have a to-do for adding something for the pod disruption budget: I think all the operators are using a pod disruption budget, and I can't remember off the top of my head which ones expose some of that; I know some of them do.
C
Or, yes, if there's a particular area that you'd like me to try to, you know... I'm trying to carve out as much time as I can to devote toward this. So if there's a particular area, let me know and I'll try to make that happen, and just be prepared for me to bombard you with questions. All right.
C
Okay, so we have... I'm missing the most important thing, the data directory for Cassandra, here. Logs, debug logs, GC logs, heap dumps if you're running a profiler, and then any storage requirements for sidecars: those are all things, as well as maybe others, for which we need volumes.
C
By default, everything for Cassandra gets stored under the data directory /var/lib/cassandra, and the logs get stored under /var/log/cassandra. I just briefly showed what's done in cass-operator and then an example from CassKop. The examples here are just focusing in on the spec, not what the operator actually generates in cass-operator.
H
No, we added it; okay, that was in the last release. But all the stuff with adding storage mounts, I don't think we support, or want to support, today. I do think we eventually want to support them; we just haven't worked on it yet. What you all do here is very interesting, though. But yeah, the tailing-the-logs sidecars, yeah, we stand up one of those in our default setup. So, okay.
A
I mean, we can see it on the screen; I guess we're going to share the documentation, and I think it's the same. Okay, yeah, I think it's the same. So, like I was saying, we are able to define storage volumes, and those volumes are meant to be mounted by sidecar containers. So, for example, we want to be able to ship the logs, the garbage collector logs, or we want to be able to have some particular probes...
A
...listening to special events. I mean, it's not up to us; it's more up to the sysadmins or the ops folks who want to have more control and to, you know, add their own probes or something that they defined or developed before and that is already running in production. So, of course, we're trying to talk to them and discuss and try to figure out whether it makes sense or not, but usually, if they need it, we need to provide a way for them to use it.
A
So here we have two ways to do it. The first one is the storage volumes, where they can mount particular spaces in order to access logs, to access anything that is in the Docker containers. The other one is the sidecar, and in the sidecar you just specify the mount volumes that you want to access, and those mounted volumes are a subset of the storage volumes that we defined earlier. So you can decide what you want to mount for which sidecar, and then you can have access to it and use it.
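The constraint described here, that each sidecar may only mount volumes drawn from the storage volumes declared in the spec, is easy to check mechanically. A minimal sketch, where the field names and volume names are illustrative and not CassKop's actual CRD schema:

```python
# Sketch of validating that sidecar volume mounts reference only the
# storage volumes declared in the spec. Names are illustrative.
from typing import Dict, List

def undeclared_mounts(storage_volumes: List[str],
                      sidecar_mounts: Dict[str, List[str]]) -> Dict[str, List[str]]:
    """Return, per sidecar, any mounts that reference undeclared volumes.

    An empty result means every sidecar mount is a subset of the
    declared storage volumes, which is the rule described above.
    """
    declared = set(storage_volumes)
    problems: Dict[str, List[str]] = {}
    for sidecar, mounts in sidecar_mounts.items():
        missing = [m for m in mounts if m not in declared]
        if missing:
            problems[sidecar] = missing
    return problems
```

An operator's admission or reconcile logic could reject a spec whenever this returns a non-empty result, surfacing the offending sidecar names in the error message.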
A
The garbage collector, yeah. This is one thing that we hate: we cannot get access to the stop-the-world metrics directly. It's something that you need to enable on the JVM for it to log how much time the stop-the-world pause takes, so we have to do it, and it ends up in another log file, and we have to be able to, you know, ship that log file.
A
I think this is pretty much it. I'm trying to look, but I don't think there is much more to see. When we define sidecar containers, we just use the existing type, which is just Container, and that way you can configure or define everything that you want at the container level; we don't put any limit on it.
C
So that's one of the questions, one of the things I captured below; thank you, Cyril. I like the flexibility, I really like the flow, and I wanted to have this example here. And to me it's kind of a best practice: you want to have those logs, the system logs, exposed. So I like the fact that it's also expected in cass-operator; I don't need to think about it, it's just there.
C
I'll need the system logs; I might need the debug log, for example, to go and look for query logs; or I might need, you know, the GC logs. Now, ninety-nine percent of the time, 99.9 percent, I'm dealing with clusters that are not running in Kubernetes, so it's easy to get these things; but in Kubernetes it's a different story. So for me, I want at least to have the flexibility to...
C
...you know, to be able, without too much configuration, to make these different things accessible. Or, like Jim raised a question about, maybe I want to put the commit log on a different volume. So what I started sketching out not too long before the meeting, since we know that there are a number of volumes, was explicitly defining them; so not too different from what we've already seen in some regards, having a volume claim for the data volume. Here's what I was trying to strive for, though I didn't really figure it out: having a, by default...
C
...other sidecars, like backup/restore as an example, or any other sidecars that you might be using. The challenge that I couldn't really sort out is the fact that you can't have separate, distinct volume mounts with the same base path; it's going to be the last mount that wins.
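The base-path collision described here, two volume mounts in one container targeting the same mount path, can at least be detected up front rather than silently resolved by whichever mount lands last. A small sketch, with illustrative volume names and paths:

```python
# Sketch of detecting conflicting volume mounts within one container:
# mount paths claimed by more than one volume. Names are illustrative.
from typing import Dict, List, Tuple

def conflicting_mount_paths(mounts: List[Tuple[str, str]]) -> Dict[str, List[str]]:
    """Given (volume_name, mount_path) pairs for a single container,
    return each mount path claimed by more than one volume."""
    by_path: Dict[str, List[str]] = {}
    for volume, path in mounts:
        by_path.setdefault(path, []).append(volume)
    return {path: vols for path, vols in by_path.items() if len(vols) > 1}
```

A spec validator could run this per container and refuse any layout where the result is non-empty, which avoids depending on mount-ordering behavior at all.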
D
So on the logging side, is that something that we would maybe just leave up to Logback and just leave it as a Logback configuration? If people want to write it to a volume somewhere, they can do it; but if they want to push those logs somewhere else, they can use Logback to do that. That would be my gut feel on that.
C
Like you have now, it seems like that, for example, with the system logs being tailed to an emptyDir volume; or, if you want to send them somewhere else, try to do so in a way that's going to minimize the number of places the end user has to touch to make that change. Ideally it's just one change in the spec, one property, to do it; that I would consider a victory.
H
Sure, but all those Java logs, they're not... I don't know that they're configurable at all. You can redirect where those go, and then you can flush everything to, I think, standard out or whatever, like the standard nginx container does to make its logging idiomatic. You know, nginx is the same thing, right? It has an access log and an error log, but then it logs...
H
My logs are just going to drop this stuff on the floor, because it's not going to parse. So I do think we should actually keep these things as separate streams, because the format for the GC logs is consistent with itself: you could write a parser for that, but you're never going to write a good parser for it mixed in with the Cassandra logs, mixed in with the full query log. And I don't know if I've even seen that format.
C
So the ideal scenario is that users presumably have proper... the ideal scenario is that I have a centralized logging system, something like an ELK stack, capturing my logs as structured logs. But certainly, starting out, a lot of people, maybe most, aren't going to be in that ideal scenario, right?
C
For ourselves, one of the things that my team does is we'll use some internal tools, basically just some shell scripts, that will go and scrape nodes; again, this is outside of Kubernetes; and it will collect logs, various metrics, and so forth. And, as you know, I've thought before...
C
Well, how would that work applied to the Kubernetes world? A lot of times it could be on clusters that are unstable, so maybe there's some volatility. I could easily see a scenario where you've got your settings, your logs are not persisted, and we're trying to use our tool and say, oh, we don't have any logs, so we're not able to do a lot of the analysis, and then the customer says, okay, well...
B
It's always fun to be the YouTube guy, because, by the way, YouTube has gotten a really cool setup, and I'm going to say that while we're recording so they can hear it and the Google lords will bless me. But I will get this going again. We have a meeting next week; we'll pick this up from there, John. If there's anything in between, just put it up on the Slack. All right, thanks, everyone.