From YouTube: 2020-01-28
A
All right, I'm recording. And man, John, I told you: best face, best face. Yes, all right. So first things first, let's do a quick update on cass-operator. Last week Cyril gave us a really good update. Cyril, if you want to do it again, or Jim, you're here today if you want to take a stab at it, or you guys can double-team on this.
B
Let's see. So there's a bunch of stuff collected. Cyril asked me, hey, what's the release policy? For the first few months we tried to go with one release per month, but then summer hit and I didn't want to stress anyone out. So I said, okay, it's fine if we miss a month, and I think we ended up waiting about eight weeks between releases, maybe nine. That was fine. Then in the fall I think people got busy with other work.

We got another release out just after KubeCon, and now we have some stuff built up that we could release; it's fine. I pushed a little bug fix release. The release machinery is better than before.
B
I'd like the documentation to get better, and I have talked to Chris Bradford a bunch about that.

So that's kind of what has happened. As for what can yet happen: I promised Frank that I was going to put up an example of how to create some compatible Docker images. I put that in as a comment on the ticket today, and I made just a personal GitHub repo; it's just an example.
B
Jeremiah, who joined, pointed it out; he said, hey, it 404s. And I realized the default settings will make a private repo. It should be there now; sorry about that hiccup. If you saw the ping come through, it probably didn't work until half an hour after the GitHub email went out.
B
I've wanted to talk about who kind of has commit rights, and I know Chris Bradford was talking about maybe moving cass-operator to the K8ssandra org. I'm perfectly comfortable just being a team player here; I don't really want to be the central organizing force. So things are moving in the right direction. I think we just need to be more organized.
C
Yeah, so we had some discussions with John around that, and some tickets were created. I don't know if some references were added to the ticket I created; maybe we should add some references to your new ticket in there, John. But I know there were some tickets on Cassandra and maybe on the Medusa operator, though I'm not sure on that side. It needed to happen at some point.
C
I also asked John for a list of the current supported features and the missing features. Based on what we said before, we added a new feature in our backup where we can restore a table as another table, so under a new name, as long as the structure already exists. Those features are going to be wanted if we migrate, for sure. Besides that, I think I just wanted to try to get those issues created.
D
Just a quick follow-up about the tickets. Unfortunately, I don't think there's a single good spot to create them, because depending on what it is, we may have changes where we need to touch Medusa, cass-operator, and then the medusa-operator. And I'd explained to Cyril that the process hasn't been well defined. But the process I've been following is based on the fact that most people are probably looking at the K8ssandra repo.
D
Even if I've got to do a bug fix, say in the medusa-operator, I'll create the issue in the K8ssandra repo, and then of course the PR would go to whatever repo is relevant, just because that gives the best visibility. But I'm happy to create tickets wherever it's going to be most convenient and make the most sense for people, and just link them as necessary.
D
But I just wanted to share that. There's a bunch of backup-and-restore-related tickets in the K8ssandra repo right now; there's a backup-restore label. I don't think there's a ticket yet; in fact, let me rephrase that: I know there currently are not tickets for all the features that Cyril has gone through with me. Those will still need to be written up.
A
All right, very good. So Jim dropped that little nugget about moving repos, or centralizing them a bit, just to get them out of the DataStax org. For now this is just a discussion point, not a decision. Chris, you were the one who brought it to me.
E
Yeah, we've been talking about it a bit internally, and it really works toward the goal of bringing together not just K8ssandra and the Helm charts, but also cass-operator, cass-config-builder, and the Cassandra cluster definitions. All of these efforts have been around running Cassandra on Kubernetes, period, and we felt that bringing them together under a common organization with those goals is a key part of that, and makes the work less scattered.
E
So that's part of the thinking behind it. One of the other things we've noticed, at least while working on the K8ssandra pieces right now, is that a lot of the Helm chart updates are locked to cass-operator releases, despite some changes not actually requiring a change to cass-operator, and we wanted to make that life cycle a little bit shorter.
E
That way we can get features iterated on in the Helm chart that, again, don't require changes within cass-operator proper. So that was one of the other motivating factors, but I'd love to get some feedback here. Cyril, Frank: what are your views in this space, and maybe some of the gotchas that you can see happening?
C
Sorry, I missed your point, your question. I was answering something.
A
Excellent. Yeah, the discussion that Chris and I were having was about starting to peel it away from the DataStax org, and this is a good way to do that. The K8ssandra org is somewhat independent; it got moved around, and it's not a DataStax repo. So that's with an eye towards the future: what is this going to mean down the road?
A
My personal point of view on these sorts of things is that this is an open source motion, and it doesn't need to land in DataStax specifically. I mean, it's been okay there, and the other thing I worry about is that we're going to break something that isn't broken, or fix something that's not broken. But as I'm looking at more participation in the community, I think it just has better optics overall.
B
Yeah, I think K8ssandra seemed to hit a nerve in a good way, and we've seen a lot of people come in interested in using it. So if it's more confusing for it to stay under DataStax, that's bad; if it's more confusing to change it, because we've started to set some expectation of looking for it there, that's bad too. I don't have very strong feelings here.
E
In that regard specifically, there is a project spinning up right now to get a cass-operator documentation website launched. I'm working with some external contributors to do the design and the site creation. Obviously we have to bring a bunch of the content to the table, but that is actively happening right now. Whether it should be nested underneath the K8ssandra project or underneath the k8ssandra domain is an open question.
E
I'm punting on that for now. It's going to have its own TLD, despite being under the K8ssandra organization: cass-operator.io. There's nothing there right now; you can go to it, and it's not going to load anything, but that's where it will be living. That is something that has been sorely needed, and we recognize that, and I've gotten the process started there. It will also be open source, part of the cass-operator repo.
E
So as long as a commit has made its way into the docs folder on, I think, master, the main branch for that repo, it will get published out to that website. That's the goal. We're doing the same thing in K8ssandra: if you submit a PR to the docs folder and it gets merged into main, it gets deployed, so anybody can put docs changes out there. It's using the same template as Kubernetes, so it's Docsy on top of Hugo.
E
So it's just Markdown in a bunch of folders, and like I said, you should see some traction on that in the coming weeks.
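The merge-to-main publishing flow described above could be sketched as a GitHub Actions workflow along these lines. This is a minimal sketch, not the project's actual pipeline: the workflow name, paths, Hugo version, and deploy action are all assumptions for illustration.

```yaml
# Hypothetical sketch of "PR merged into main -> docs site deployed".
# Paths, versions, and actions here are illustrative assumptions.
name: publish-docs
on:
  push:
    branches: [main]
    paths: ["docs/**"]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          submodules: true   # Docsy theme is commonly vendored as a submodule
      - uses: peaceiris/actions-hugo@v2
        with:
          hugo-version: "0.79.0"
          extended: true     # Docsy requires the Hugo extended build
      - run: hugo --source docs --minify
      - uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: docs/public
```

The key property is the one discussed in the meeting: nothing beyond a merged PR touching `docs/` is needed for a change to go live.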
A
From a community standpoint, I'm anxious to potentially get some contributors in that arena. That's one of the hardest things to get contributors for; it's like, oh, you liked it, can you write some documentation? Yeah, it sounds exciting. But it does help quite a bit, and as long as you have a clean way to get it done, sometimes it's easier. So I think that's a good thing to demonstrate.
A
I might even make a quick video on how you can contribute to the docs. I think some of the first PRs we had were like, hey, this word was wrong. It was just a really, really simple PR, and those are great; we'll take a million of those every day. Well, a couple more.
B
We could definitely take a lot of those, yeah. We've definitely gotten some more; the first one astonished me, but now it's at least two a month or so of small changes hitting the repo. Or somebody pointed out, oh, you're using busybox:latest. I said, yeah, I regret that. And then he said, well, you could just tag it.
B
I thought, that's already better than what I had, so sure, and got that accepted pretty quickly. I think that's moving; given where this was when we opened it up at the end of March, we've moved pretty far forward in a lot of ways, but there's a healthy gap to cover still.
B
I think it's interesting, the questions that come up that I think docs are the bug fix for, like questions about how to manage the logs or log volumes. It seems like folks who've not run something in Kubernetes before, which I find interesting, need some general advice to set up a logging sidecar or a log aggregation system. That's fine, but I didn't expect it. Our role in the Kubernetes community would sort of be to help give people pointers back to the official Kubernetes docs that show this really well, right?
B
So I think there's something there, Patrick, something you'd be interested in. We're getting people who know more about Cassandra than they do about Kubernetes, but they're determined to push through.
E
So whether people are driving our new-user getting-started steps, or looking for where they would hook in something like Promtail or Fluentd for logging: you don't necessarily need to document, step by step, how to set up the logs for Promtail, but users might need to know, hey, you should probably add a container in the pod template spec here, and these are the options for that container, and then point them to the Promtail docs.
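The "add a container in the pod template spec" pattern just described might look roughly like this. This is a sketch under assumptions: the sidecar name, image tag, volume names, and mount path are illustrative, not the project's documented configuration.

```yaml
# Hypothetical sketch: a log-shipping sidecar added through the
# CassandraDatacenter's podTemplateSpec. Names and paths are assumed.
apiVersion: cassandra.datastax.com/v1beta1
kind: CassandraDatacenter
metadata:
  name: dc1
spec:
  clusterName: cluster1
  serverType: cassandra
  serverVersion: "3.11.7"
  size: 3
  podTemplateSpec:
    spec:
      containers:
        - name: promtail                 # sidecar name (assumed)
          image: grafana/promtail:2.0.0  # tag pinned, per the busybox lesson
          args: ["-config.file=/etc/promtail/promtail.yaml"]
          volumeMounts:
            - name: server-logs          # volume shared with the cassandra container
              mountPath: /var/log/cassandra
```

From there, the Promtail configuration itself is exactly what the upstream Promtail docs cover, which is the hand-off point being proposed.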
A
That's the sort of thing where we can potentially get more focused help. And if we go to Google Season of Docs, which is a thing, it's really nice to present something that says, here's the easy way to do it, because we can get a real good pull on that if we have an easy way to get in. Cool. All right, so just to bottom out the discussion on this move of repos: it doesn't sound like anybody has any strong objections.
A
I would like to make sure that we get as many voices in here as possible. My first guess is, okay, we'll just throw this in the ASF cassandra-kubernetes room and say, hey, look, we're going to do this, any strong objections? But I don't know; maybe putting it on the mailing list? I don't even know where to put this. I'm open to suggestions.
G
But yeah, you probably definitely want to send something out on the users list when you're doing it, or possibly even before, just to tell users, hey, we're moving this; if you were searching for it here, it's going to be over there now.
F
I think putting it on the mailing list would be quite good, because there hasn't been much talk about operators on the mailing list recently, I think. And because we're talking about moving repos, that may trigger some interest from the developers; they may come have a look or maybe participate. And this way it will be written down there, instead of in Slack, where things just happen and then disappear. So I would vote for the mailing list, personally. My two cents.
G
I mean, it just depends on what your goal is, Chris, right? If your goal is to get developers, the core Cassandra committers, knowing about you doing this, you put it on dev, and if you want users to know about it, you put it on user. So it just depends on your goal in doing it. If your goal is, hey, do you think this is bad if we do this, you might put it on dev. If your goal is to announce that we're doing this, put it on user.
A
Yeah, I don't think there's anything wrong with over-communicating. No one's going to say, wow, can you stop talking about this? They'll just do what they normally do, which is ignore the mailing list. I tend to over-communicate as a default.
B
I'm guessing Chris will drive this forward, but just ping me when that's happening. I think the GitHub Actions will probably break for some stupid-to-legitimate reason, and I'd like to keep on top of that stuff, because it's worse when you have something like: oh, someone reported a pretty urgent bug, I want to get it patched and fixed. That happened like a week and a half ago, with decommissioning not working, and they were using it, and okay, well, they had the fix.
B
So it was good that all the automation was just, okay, fire it. This isn't to say it's a blocker; I just would like to try to get ahead of it, I think.
E
But to your point, yeah, secrets might get impacted by this change. I think there is a follow-up conversation to be had around what we're doing with artifacts. Specifically, as PRs are merged, we should probably have a location where you can get at those artifacts, in case a bug fix has come in that you need right now, before a release is cut.
B
It's just not documented, but it's all there. If you look at the actions, we push a GitHub Package, or whatever GitHub calls it. We didn't upgrade to the... I don't know, the GitHub stuff has been underwhelming, but anyway, it does go out. So I hear you; it's just not documented.
E
Right, so we should have it documented, and be open about where it exists. And I think that's true for K8ssandra as well, for what it's worth.
A
Speaking of which, got an update for K8ssandra? Yeah, nice segue.
E
Backing up: that release went out with some changes in it, and the Helm charts for 1.5.1 were published as of yesterday. That's something else that needs to be automated. My apologies for the, I think, week-long delay on that; it's probably closer to two weeks. COVID time is weird.
E
That's all I have for cass-operator. But for K8ssandra: John or Jeff, do you want to take the lead on that?
D
Sure. So, let's see. We haven't been publishing any updates to the charts recently, because there's just been a lot of churn with changes, and I got pinged by a couple of people internally at DataStax who ran into something breaking. I figured holding off would cut down on those transitory errors, but there's a bunch of things in flight.
D
I think later today, at some point after the meeting, I'm going to go ahead and push an update with some of those changes, working towards the 1.0. Let's see, one of the things we're trying to do: we have support now for 3.11.7, 3.11.8, and 3.11.9, and we need to submit a PR. Eric...
D
Is he on the call? I don't think he is. We need to update cass-operator to add support for beta 4; there's a validation check in the CRD where the regex needs to be updated. It's a really trivial thing, so that we can support running with the latest beta. And then, let's see, some initial authentication support is almost done, and some ingress changes as well.
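The CRD validation change mentioned is the kind of thing sketched below: widening a version regex so a new beta tag passes admission. The field name follows cass-operator's CRD shape, but both patterns here are illustrative assumptions, not the project's actual values.

```yaml
# Hypothetical fragment of an openAPIV3Schema in the CRD.
# The regexes are examples of the category of change, not the real diff.
properties:
  serverVersion:
    type: string
    # before: only GA 3.11.x and early 4.0 beta strings are accepted
    pattern: "^(3\\.11\\.\\d+)$|^(4\\.0\\.0)$|^(4\\.0-beta[1-3])$"
    # after: widened so the latest beta tag (e.g. 4.0-beta4) also validates
    # pattern: "^(3\\.11\\.\\d+)$|^(4\\.0\\.0)$|^(4\\.0-beta\\d+)$"
```

This matches the "real trivial thing" framing: the operator logic is unchanged, only the admission-time regex needs to admit the new version string.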
D
So I think that's it at a high level. Oh, another thing that we did; I don't know if anybody here has spent much time with Helm, but we'd love to hear any feedback or thoughts on this. We've gone through, or are still going through, a maturation process: starting out with one or two charts and kind of lumping all the various components into them.
D
So we have the cass-operator chart, with multiple operators actually getting deployed within that one chart, plus Reaper, Prometheus, all that. After taking a step back and looking, and some discussion, we felt that a better approach, a little more sound engineering-wise, was to refactor and start breaking those separate components out into their own charts.
D
So that's been going on as well, and it will also facilitate some other things, making it easier for the end user to get into different topology situations. If you want to deploy K8ssandra, and let's say you want to use a cluster-scoped deployment of cass-operator, it makes that a little bit easier to do. You can go ahead and use the components independently. It hits some interesting nuances with Helm along the way.
D
Hopefully I get a chance to document or write some of those things up at some point. But that's a big change; if you've been paying attention to the GitHub repo, you'll see that there are several charts now, and there are going to be a couple more, one for Stargate.
D
In fact, I created a ticket, maybe a couple of weeks ago, about having another chart repo; let's call the current one "stable". Any time a pull request is merged for a chart, the new repo would be updated with that chart version, and then when we go to cut a release of K8ssandra, the stable repo is updated. That's what I think we're trying to move towards.
D
There's some work that has to be done with the GitHub Actions to facilitate that. It would make things a little easier, though, for the scenario I described earlier, where somebody was running into issues because of the changes I was introducing: they were just grabbing the latest chart version available, and things were breaking as a result of the changes we're making.
E
Yeah, in that vein, I think that's important. It's a sign of the project maturing that we have to deal with some of these issues. And just as we said about cass-operator, where we're pushing artifacts out for specific pushes to those main branches, I'd like to see something similar with K8ssandra that doesn't affect stable, because that's certainly a concern.
E
There are some discussions happening around how chart dependencies are stored. Right now the tarballs of those charts are in the repo, and those are going to get dropped. There's going to be some documentation coming out about how to get the latest versions; `helm dep update` is the right command.
E
We just want to make sure it's very clear how to get those dependencies, such as kube-prometheus-stack, separately. Meanwhile, there's been a lot of work in the Stargate integration space: the PR for that is merged, and there's also been a PR merged for ingress.
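For context on the dependency flow being described: a chart dependency like kube-prometheus-stack is typically declared in the chart's Chart.yaml and then pulled locally with `helm dep update`, rather than vendoring tarballs in the repo. The version constraint and repository URL below are assumptions for the sketch, not K8ssandra's actual values.

```yaml
# Illustrative Chart.yaml fragment; versions and URLs are assumed.
apiVersion: v2
name: k8ssandra
version: 0.1.0
dependencies:
  - name: kube-prometheus-stack
    version: "12.x"
    repository: https://prometheus-community.github.io/helm-charts
    condition: kube-prometheus-stack.enabled
# After editing dependencies, run:  helm dep update <chart-dir>
# which downloads the tarballs into charts/ without committing them.
```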
E
That's on the main branch right now, so any of the HTTP APIs for talking to Stargate are now using native Kubernetes Ingress objects; whatever ingress you're using, it should just work. Ingress with TCP is weird on Kubernetes: you kind of have to go your own way with custom resources based on the ingress controller you're using. We've done a little bit of research here.
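Exposing Stargate's HTTP APIs through a native Ingress, as described, could look like this minimal sketch. The host, service name, and port number are illustrative assumptions (8082 is Stargate's usual REST port, but verify against your deployment), not K8ssandra's actual manifest.

```yaml
# Hypothetical sketch: routing Stargate's REST API through a plain
# networking.k8s.io/v1 Ingress. Names and ports are assumed.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: stargate-http
spec:
  rules:
    - host: stargate.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: cluster1-dc1-stargate-service  # assumed service name
                port:
                  number: 8082                       # REST API (assumed)
```

This works for the HTTP APIs precisely because Ingress speaks HTTP; the CQL native protocol is the TCP case described above, which is where controller-specific custom resources come in.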
E
The Gateway API looks like it's going to be a solution down the road, but I think that's still forming, if you're following along with the head of Kubernetes. So one of the things that came up, an interesting edge case worth talking about: say I'm running Cassandra on Kubernetes and I have applications outside of Kubernetes, not Cassandra.
E
How do they talk to the cluster? Well, there's NodePort and there's Ingress right now, or you can do some really cool things with networking, but it's tricky.
E
We have documentation around how to run with ingress, and it works. It's pretty interesting how it works if you decide to use SNI and TLS and all that. But when we bring Stargate into the mix, that's where things get interesting, and we're trying to think through whether there is a case where you would want to expose both the Cassandra cluster nodes and the Stargate nodes through the ingress; or, if you are using an ingress and you are using Stargate, do you need to expose the main Cassandra cluster at all?
E
Fair enough. So do you think there is a use case where you would want to expose both of those things? Or do you think that if Stargate is deployed and available via ingress, there's no reason to expose the Cassandra cluster's native port, since Stargate also exposes the native port for communicating with the cluster?
G
...wanting to do it. I mean, if it's super hard to allow people to expose them both on different services or whatever, then okay, but...
E
It could be a little cleaner and a little better documented for external usage. Right now it's there, and you can use it, but you kind of have to dig through the weeds to figure out how to create the SSL context that uses the SNI, and then you have to create the SNI endpoints based on the host IDs, and that's not user-friendly at all at the moment. I'd like to see some cycles spent there. And that leads to another follow-up thing.
E
There were some discussions recently about host IDs, or more specifically, how to do that kind of routing. Do you want to use host IDs as the unique identifier for nodes, or do you want to use the hostnames that the StatefulSets are creating, the ordinal hostnames, for doing that TLS routing? What do you all think is the right way to go about this? Any thoughts in that space?
A
I really... and actually this is a call to action that I think everyone here should take to heart: we need more users in here. I think we are users slash practitioners; we do both, building and using at the same time. I've been trying to encourage a lot of other groups that are interested, that are putting up their hands, to show up at this meeting, and it's really hard.
A
It's like trying to get someone to show up at a meeting after dinner; you know, I'm gonna go lie on the couch. But I think this is where this group needs to start evolving to, because there's a really good chance that we'll get some good interchange here. And I'm not saying that we don't have great opinions here, but I really do love the breadth that we could potentially get.
F
It's a problem because, I mean, look at our contribution. We're coming from CassKop, which we're still developing, and now we're trying to get into cass-operator and make it work. But then, at the same time, okay, you went over to K8ssandra, so we had to look around at K8ssandra, and we saw that for backup and restore it's Medusa. So we go to Medusa and we try it, and it works. And now we come to the meeting...
F
...and we talk about Stargate, which is another part of the puzzle. So it's quite difficult to have people who can follow you on all the different parts, when we don't know any of them in the first place. That's why we can't say anything, I mean.
F
I agree; try to put yourself in other people's shoes. We're actually looking at the releases of cass-operator too, and we're saying, oh, we haven't looked at Stargate yet. Well, we just don't have time; we can't do that. I mean, we would like to, but we just don't really know what it is in the first place.
A
No, that's fair. And I remind everyone that this is also being recorded and we do get a lot of views on that. So there are people that get updates, and that's okay. They must love getting a weekly update of our Brady Bunch right here, as I can see on my screen. But that's good feedback, Frank.
A
It almost makes me think that we should have two different kinds of these SIG meetings: one that's just user-based, and one that's more contributor-oriented. You know, we have a user and a dev mailing list; we could have a user meeting and a builder, or dev, meeting. That is a lot harder. The CNCF does something like that; I don't know how well it works for them. But how does that sound to everyone here?
A
Yeah, well, I did that on the user mailing list. I said, here's my proposed agenda; it'll be some updates on various projects, and then at the end, bring your questions and we'll answer them. Yeah, crickets. It's hard, you know.
D
But the point is good. I mean, I think there are discussions that are more user-focused and ones that are more developer-focused. There's a ticket I'm trying to find in the K8ssandra repo where we're having a discussion around, okay, what kind of default settings do we want? What are we targeting?
D
You know, defaults for the various components that are installed, namely Cassandra. We want to target a developer setup, and it's a matter of figuring out what sane defaults look like. Valerie chimed in on the ticket; I think she's from Pythian, yeah, and she's in the Raleigh area.
D
I've chatted with her a little bit on LinkedIn, and she made a comment about doing some testing, just trying to cram Cassandra nodes into Docker containers for testing. I think we've all felt that pain at different points. I know on previous projects, at a previous job, we were trying to figure out, okay, how can I tune Cassandra down?
D
That way I can run three or four nodes locally for doing different things. And I think that's applicable here, whether you're working on K8ssandra or on the operator: being able to do things reasonably. Making that happen requires a little more precision and understanding of what dials and knobs to turn in Cassandra.
D
It depends on what you're trying to test. If you're just trying to deploy a cluster in Kubernetes and make some configuration change within your StatefulSet, and maybe it's sufficient to just make sure the cluster comes up and runs, then I don't think you need each node running with a big heap. I was looking for the ticket, but I couldn't find it.
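As a sketch of "tuning Cassandra down" for a small local cluster through cass-operator's config passthrough: the field names below follow the operator's config block, but the specific keys, heap values, and resource requests are assumptions for a dev setup, not recommendations.

```yaml
# Hypothetical dev-sized CassandraDatacenter; all values are assumed.
apiVersion: cassandra.datastax.com/v1beta1
kind: CassandraDatacenter
metadata:
  name: dc1
spec:
  clusterName: dev
  serverType: cassandra
  serverVersion: "3.11.7"
  size: 3
  resources:
    requests:
      memory: 1Gi      # small enough that 3 nodes fit on a laptop
      cpu: 500m
  config:
    jvm-options:
      initial_heap_size: "512M"   # shrink the heap so nodes coexist
      max_heap_size: "512M"
```

The point mirrors the discussion: for "does the cluster come up" tests, a few hundred megabytes of heap per node is plenty, and the knobs live in the operator's config block rather than in a hand-edited cassandra-env.sh.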
A
Closed, won't fix. Yeah, one of my favorites: out of scope. All right, I had one other little item that came up this week. I was working with a company called WitFoo, and they're doing some Cassandra 4 testing with containers. They found some pretty interesting settings changes that might go into the Cassandra project. JD, you know about these; you saw some of that traffic. But just an FYI.
A
Since we're all pretty much running Cassandra in containers, just to let you know, this was on beta 3, not beta 4. I got a huge download, so I'm going to pick through it and see what the changes are. Essentially it's some changes to JVM settings for JDK 11; it looked like some stuff got carried over from JDK 8 that isn't really relevant. So I'm going to look through it, but I think this is a crowd that would probably appreciate some changes to the...