From YouTube: Tanka Community Call 2020-08-04
B
I think one of the things we could start by mentioning is something that Jeroen, who is on the call, has been playing with. It's just an idea we've been bandying around, and it came with a delightful bit of naming confusion: I named it "helmerizer", and Jeroen misheard me and came up with the name "helmraiser", which is far better.
B
So what it does is — I think it's only a bash script at the moment, but it consumes a Helm chart and creates a Tanka library from it that can track the Helm chart through time. I don't know if you want to talk about it?
C
Yeah, I can show something. It's a combination of a bash script — 500 lines or so — and I'll just pop it on the screen.
C
Here it is. So it's a combination of a script and a libsonnet file that does some magic layered over that to do the templates. The key action is here: we do `helm template`, we put some names in for the release and the namespace, and then replace them with template tags — which I got from another colleague, who wrote it for a specific Helm chart.
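The `helm template` step described here — render with known sentinel names, then swap them for placeholders a library can fill in — can be sketched roughly like this. The sentinel strings and placeholder names are invented for illustration; the real script's conventions may differ, and the `helm template` call is only shown as a comment for context:

```shell
# Hypothetical sketch: render the chart with sentinel names, then turn them
# into placeholders that a Jsonnet/Tanka library can substitute later.
# The real rendering step would be something like:
#   helm template sentinel-release chart/ --namespace sentinel-ns > rendered.yaml
# Here we start from an already-rendered manifest so the transform is visible.
cat > rendered.yaml <<'EOF'
metadata:
  name: sentinel-release-webhook
  namespace: sentinel-ns
EOF
# Swap the sentinels for template tags the library fills in per environment.
sed -i 's/sentinel-release/TANKA_NAME/g; s/sentinel-ns/TANKA_NAMESPACE/g' rendered.yaml
cat rendered.yaml
```

The point of the sentinels is that they are unlikely to collide with real values anywhere else in the chart, so a plain textual substitution is safe.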
C
I just generalized the idea, and I output the YAML files with values and whatever. If you want, you can generate that with certain values and hopefully build something that is generic enough that it can be used within your organization, and then you can reuse it.
C
So let me show a bit of the code that goes in here. It does a few small things: it groups the custom resource definitions together, so you can easily cut those out if you don't need them, and then it replaces all the variables with the prefix, the config name and the namespace — in a clunky way — and configures it. So that's a function.
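The CRD-grouping idea can be sketched in a few lines of shell: split a rendered multi-document manifest so that all CustomResourceDefinitions land in one file and everything else in another, making the CRDs easy to cut out. The file names and the awk approach are illustrative, not the actual script:

```shell
# Split a multi-document YAML stream: CRDs into crds.yaml, the rest into main.yaml.
cat > rendered.yaml <<'EOF'
kind: CustomResourceDefinition
metadata: {name: certificates.cert-manager.io}
---
kind: Deployment
metadata: {name: cert-manager}
---
kind: CustomResourceDefinition
metadata: {name: issuers.cert-manager.io}
EOF
# Treat '---\n' as the record separator, route each document by its kind.
awk 'BEGIN { RS = "---\n"; ORS = "" }
     /kind: CustomResourceDefinition/ { print $0 "---\n" > "crds.yaml"; next }
                                      { print $0 "---\n" > "main.yaml" }' rendered.yaml
```

With the CRDs isolated, a consumer that does not need them simply never imports that file.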
C
Let me show the library itself, for cert-manager. It generates this file, which makes sure that every manifest file ends up in a JSON construct which you can use.
C
The idea is that you pull that in like this; it's generated, you can configure it further with something else, and then you can use other functions to patch it up, or pull it in somewhere in your main. The idea is that this file is something that doesn't change, so the generation doesn't touch it.
C
So this is basically your overlay that you can use to generalize a library for use somewhere else in your organization — add namespaces, or, for example, we have the CRDs that we don't want to do any labeling on, but we want to patch the labels onto all the other objects. So we do that here. And if we update the chart, we can just generate a new one with the new version of cert-manager, and this overlay should keep on working.
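An overlay like the one described — labels patched onto everything except the CRDs, kept separate from the generated file so regeneration doesn't touch it — might look something like this in Jsonnet. The import path, field names and label are all invented for illustration; the sketch only writes the file out:

```shell
# Hypothetical Jsonnet overlay: label every object except the generated CRD group.
mkdir -p lib
cat > lib/overlay.libsonnet <<'EOF'
local certManager = import 'cert-manager/main.libsonnet';
// Patch a label into one object's metadata.
local patchLabels(obj) = obj { metadata+: { labels+: { team: 'platform' } } };

{
  // Walk every generated manifest; skip the 'crds' field so CRDs stay untouched.
  [k]: if k == 'crds' then certManager[k] else patchLabels(certManager[k])
  for k in std.objectFields(certManager)
}
EOF
grep -q 'patchLabels' lib/overlay.libsonnet && echo "overlay written"
```

Because the overlay lives in its own file, regenerating the library from a new chart version leaves it intact — which is the "this one should keep on working" property described above.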
C
That's all I had to say about it. If you have any questions, feel free. I hope to open source it very soon — I was already talking to Malcolm about it earlier.
B
I think the way you described it to me, which made a lot of sense, is that you would not necessarily use this as a library to generate a public-facing Jsonnet version of the Helm chart. It's more that — because a Helm chart will contain lots of conditionals that deploy things in different ways across, you know, many different varieties of infrastructure — a particular organization may well use this to create one that suits them, and so it's going to be an internal thing.
C
Or you can make them public, but make sure you say that this is a different version — it's not there to replace the chart, it's to make the conversion, so you can plug it into existing Jsonnet systems.
A
Are you planning to put this as part of the Tanka repository, or in a tool/ subdirectory or something?
C
If there's a lot of response, we can build something more stable than just a simple bash script. We'll see how that flows.
A
I think getting it out, just having people play with it and making people aware of it, is already half of what you want to do. And then, if it really needs to be put into a Go thing or something, you can still figure that out. But in my opinion it's more about getting people to use it, because it seems quite useful to me.
B
I mean, it solves a specific problem: you know, the cert-manager library is huge, and it probably took a few days to convert it from Helm to Jsonnet — and then they went and updated the darn thing.
B
I don't know if everyone's heard of Grizzly yet. It's an equivalent to Tanka, but instead of writing to Kubernetes, it pushes to a Grafana instance. We've got lots of ideas about things we can improve about it; my next one is to make it support data sources as well as dashboards.
B
Part of the thing is, imagine you're working on developing a mixin or something like that. It contains a dashboard, and that dashboard gets deployed to a large environment, a large namespace — and rendering the whole thing and diffing it across your Kubernetes namespace takes a minute and a half. That makes it not a very efficient development process.
B
So with Grizzly you can push directly to the Grafana provisioning API. It's even got things like watch — this was all Tom Braack's idea, that `grr` is the shorthand: `grr watch`. It's got the same apply, show, diff syntax, but watch will check for file changes on disk, and when it notices one it'll push it straight up to the API. So as soon as you save your file, your dashboard is re-rendered and pushed up to Grafana. At the moment you'll have to reload the page, unfortunately, but still, it's likely to be much, much simpler than it was. So I don't know if that's something that's useful.
C
I'll post here a script that can be used to wrap `grr watch` and reload Firefox or something with X11.
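A wrapper like the one mentioned could be structured as a simple change-detection loop. The sketch below uses a stand-in reload command where the X11 tool and `grr watch` would go, since those are environment-specific; the file name and hook are invented for illustration:

```shell
# Minimal change-detection step: when the target file's mtime changes, run a
# reload hook. In the real wrapper, `grr watch` handles the push to Grafana and
# the hook would poke the browser via X11; here it is a plain echo so it runs
# anywhere.
echo '{}' > dashboard.jsonnet
before=$(stat -c %Y dashboard.jsonnet)
sleep 1
touch dashboard.jsonnet            # stand-in for an edit landing on disk
after=$(stat -c %Y dashboard.jsonnet)
if [ "$after" != "$before" ]; then
  echo reloading-browser > reload.log   # real wrapper: X11 browser reload here
fi
cat reload.log
```

In practice this would run in a loop alongside `grr watch`, so the save-render-push-reload cycle needs no manual step at all.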
E
Personally, I see quite a lot of potential in the tool. We were thinking about doing something on top of monitoring mixins, and I think this solves at least one part of that. We're thinking of creating something that would be like monitoring bundles, basically — a monitoring mixin, but already rendered for your cluster.
B
Yes, and I've noted that specific need as well, because at the moment the mixin is an agnostic thing. It says: here are the dashboards, here are the Prometheus rules. But then, in order to make it work, you also need to specify your data sources, maybe your notification channels, maybe your Grafana users, blah blah blah — and there is no standard way to specify those.
E
Yeah, that was the idea: to have the agnostic part make sense, and have a bundle that is kind of opinionated — basically creating a package management for monitoring mixins, something like that. So you can put your—
E
—definitions and your configuration into that, and generate a bundle for the environment. I wonder if we can extend that to Prometheus, so it can ingest the whole thing and also configure alerts and recording rules, because that's the last part of a monitoring mixin.
E
Yeah, so in a monitoring mixin you have two parts. One is the Grafana dashboards, and I think this takes care of that part. But the second part is your alerts, and a lot of monitoring mixins come with configuration options for those alerts — excluding some jobs, excluding some labels, basically label selectors or thresholds — and creating some sort of opinionated configuration management for those basically creates a bundle.
E
So we were thinking previously about creating something like — I think Matthias created the site promtools, for SLOs, to generate alerts — and we were thinking about creating something like that, cross-referenced with the monitoring mixins site, which would get data from a monitoring mixin and go to a site.
B
Fantastic. I mean, in terms of Prometheus: Prometheus is a difficult thing to configure, because it just wants one config file, but there may be ways we can work with that. Something I've been playing with for ourselves is mounting the config files for each mixin into their own ConfigMap.
B
At the moment we put them all in one, but putting each into its own ConfigMap gives a little bit of extra independence.
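The per-mixin ConfigMap idea can be sketched as a loop that emits one ConfigMap manifest per mixin directory. The directory layout and names here are invented for illustration:

```shell
# One ConfigMap per mixin directory, instead of a single shared ConfigMap.
mkdir -p mixins/node-exporter mixins/kubernetes
echo 'groups: []' > mixins/node-exporter/rules.yaml
echo 'groups: []' > mixins/kubernetes/rules.yaml

for dir in mixins/*/; do
  name=$(basename "$dir")
  # Each mixin's rule file becomes the data of its own ConfigMap.
  cat > "configmap-$name.yaml" <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: mixin-$name
data:
  rules.yaml: |
$(sed 's/^/    /' "$dir/rules.yaml")
EOF
done
ls configmap-*.yaml
```

The independence mentioned above comes from the fact that updating or removing one mixin now touches only its own ConfigMap, not a shared one.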
B
Great. You know, I'll say it: it's open source.
B
The only thing I will throw out is — I'm not sure if there is such a term as pre-alpha, but it's very young, and we're doing a lot of talking about this whole space. So exactly what shape we'll want it to take, where we want it to go and so on — who knows. But it's about getting the idea out there.
C
I just shared the link to the helmraiser script.
C
It's a PR, still a draft, so just play with it, see what it is, and we'll probably do some example during the week somewhere with cert-manager.
B
He was looking at helping the consumption of Jsonnet within Helm charts, wasn't he? Yes, he was.
B
If you take a Jsonnet library and you export the `_config` object into a values file, and then replace all of your `_config` entries with curly braces, you can basically render a Helm chart from a Jsonnet library.
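The direction described here — `_config` references rewritten into Helm's curly-brace syntax — could be mechanized roughly like this. It's a toy transformation on a made-up template; real Jsonnet references would need a proper rewrite, not just sed:

```shell
# Toy version of the Jsonnet -> Helm chart direction: rewrite $._config.foo
# references in a rendered template into {{ .Values.foo }}.
cat > template.yaml <<'EOF'
metadata:
  name: $._config.name
  namespace: $._config.namespace
EOF
sed -E 's/\$\._config\.([a-zA-Z_]+)/{{ .Values.\1 }}/g' template.yaml > helm-template.yaml
cat helm-template.yaml
```

The values file counterpart would then just be the `_config` object serialized as YAML, so both artifacts come from the same source of truth.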
C
I've been working this week on refactorings to make our internal, and also some public, libraries use Tom's k8s library, which changes some of the type usage and things like that. Are you guys using the new k8s-libsonnet thing from Tom?
B
So, have either of you used the old ksonnet `k.libsonnet` library? Was that something that was a part of your workflow?
B
Yep. So basically, ksonnet was deprecated ages ago — a year and a half to two years ago — and then discontinued probably a year ago. So we finally jumped in and replaced ksonnet the tool with Tanka — yay, we can carry on going — but we've continued relying on the `k.libsonnet` library ever since then. It's a deprecated thing, though, so Tom Braack jumped in, started helping to maintain that library, and managed to get some releases out that were available for APIs above Kubernetes 1.8, for example — just to show how advanced it was.
B
But he found the code base that generates the Jsonnet library from the Kubernetes swagger to be too complex. So he started again, and he's now created — I'm sure Jeroen can post the link; oh, he already has, there you go, he's read my mind — this library, which basically replaces that. It gives you access to your Kubernetes resources.
B
It's mostly a drop-in replacement for the ksonnet library. But not only does it come with a slightly cleaner API — it also comes with versions for each Kubernetes API version, and it comes with docs. He's created a thing called docsonnet which, if you put a certain kind of annotations in, will generate documentation for you. That was always a really, really difficult, painful thing working with Jsonnet before that, certainly Jsonnet for Kubernetes.
B
So it's definitely worth trying out and giving some attention to, if you use the previous library.
B
I suspect so.
B
We seem to have, you know, not too much more to discuss. I think a topic that we could continue to explore is monitoring mixins, because I know there is some commonality there. It's a topic that we at Grafana are exploring a lot, and I would appreciate hearing more about how other people are using them.
B
You
know
I
I
know
you
pavel.
Are
you
using
monitoring
mixings
come
on
yep?
Do
you
want
to
say
a
little
about
how
you
use.
D
Yeah, basically it's the same as with Pawel: I'm also maintaining the Thanos mixin — the upstream Thanos mixin — and we're both using all of those in our stack.
B
Well, what do you mean exactly? You're saying you work on a Thanos mixin — is that as in to monitor Thanos, or something that deploys configuration in order to set up Thanos for other applications?
D
The Thanos mixin is just for the monitoring part. We also have another project called kube-thanos, and for that we just use that repo to basically provide deployment manifests for Kubernetes.

D
They are basically deployment scripts, nothing fancy, but we wanted to define something so you can easily just point at your cluster and spin up some Thanos. They are mostly for our stack right now, for what we use; then we also provide it as an option to the community.
E
Okay, yes, for monitoring mixins I can share some insight. We had some thoughts about creating a second draft for the monitoring mixins.
E
The main issue was about global state for the mixin, and what we found out is that actually the v1 documentation — the v1 reference doc — doesn't state anything about global state, which means it should be fine. The only problem is that we basically created a bad practice of using a global state for all monitoring mixins. The next plan is for us to modify the kube-prometheus project, which is one of the main consumers of a lot of monitoring mixins, to get rid of that global state and promote some better practices for using monitoring mixins, and possibly for creating them.
B
Absolutely. I mean, we went through this probably six-month process of complaining and moaning about global state — how horrendous it was, how disastrous and terrible, how disgusting it was that these libraries forced us into using this global state — and writing lots and lots of long documents about how we wanted to rewrite everything. And then gradually, over that period of moaning and complaining...
B
We
actually
realized
that
you
can
consume
these
like
these
libraries
themselves,
don't
actually
declare
or
define
or
state
anything
about
global
state.
All
they
require
is
a
dollar
underscore,
an
underscore
config
in
their
route.
B
Exactly, and I think it took us quite a while to get to that simplicity. So in our library, prometheus-ksonnet, we've been moving away from each library just being added into the global root, and having them in their own mixin space instead — and suddenly everything is so much cleaner, and you can do some really powerful things.
B
You
know
in
kubernetes
libraries
as
well,
where
you
have
a
global
route,
but
then,
rather
than
adding
that,
rather
than
just
say,
allowing
a
library
to
reach
out
to
the
global
route
you
pass
in
you.
You
say
my
library,
colon,
my
library,
which
is
an
import
curly
bracket,
don't
underscore
global
a
config
rather
and
then
you
import
the
the
global
config
for
stuff.
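That pattern — importing a library and handing it the global config explicitly instead of letting it reach into the root — looks roughly like this in Jsonnet. The library name and config fields are invented, and the sketch only writes the file out for inspection:

```shell
# Hypothetical sketch of explicit config passing instead of global state.
mkdir -p environments/default
cat > environments/default/main.jsonnet <<'EOF'
local my_library = import 'my_library/main.libsonnet';

{
  _config:: { namespace: 'monitoring', cluster: 'dev' },

  // Instead of my_library reaching into a global $._config, hand the
  // config over at the import site:
  my_library: my_library { _config: $._config },
}
EOF
grep -qF 'my_library { _config: $._config }' environments/default/main.jsonnet && echo ok
```

The library still sees an `_config` at its own root, so nothing inside it changes — only the wiring becomes explicit at the call site.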
B
Yeah — Jeroen, could you show that Cloud SQL PR? This is an internal bit of code, but it just shows what we worked out, really well. I mean, something I'm thinking of is that at some point I'd like to put something a bit more public out about how to write — we've overused the term "effective Jsonnet" — but how to write Jsonnet.
B
Well,
how
to
like,
like
things
like
you
know,
in
our
current
prometheus
case
on
it
you
say,
add
a
data
source,
so
it
takes
your
parameters,
renders
it
as
yaml
and
then
adds
that
to
a
map.
B
So
there's
lots
of
things
that
we're
learning
about
how
to
be
really
powerful
and
effective,
with
jsonnet
that
that
aren't
obvious
and
as
far
as
I
can
see,
aren't
documented
out
in
the
world
and
I
think
could
be,
could
really
benefit
the
world
beyond
grafana
as
a
as
a
jsonic
consumer
or
the
tanker
ecosystem,
or
anything
I
mean
it'd,
be
really
useful
to
get
some
of
that
out.
C
Like adding mixins or anything else. Here you have the Cloud SQL one — so what I wanted to do was actually this.
E
Yep, that's exactly what we plan to do in kube-prometheus. However, it will be quite an endeavor to do it in kube-prometheus, because it organically grew into something that is quite large and covers a lot of edge cases.
C
Yeah, this is also the nice thing about it: we can do it gradually, so we just move one bit at a time. Like now, I just did the Cloud SQL one, because it crossed my path and needed some small refactoring for how we use it. Then you do that and you ignore all the rest, because it still works — everything can stay in the root scope until you need to change it. So it's very agile.
E
Yeah, the problem for us is that the main consumer on our side is the cluster monitoring operator — that's the main consumer of kube-prometheus — and both have a lot of Jsonnet code. So everything that we change in kube-prometheus we need to carefully backport into the cluster monitoring operator, and that's why this will be quite an endeavor to actually mutate: we will probably be doing that by carefully going through each of the components and immediately porting that into OpenShift for those components.
B
So I'm curious: you're doing this work — is this something where you use Jsonnet internally, or is it something that your customers use as well? Do your customers find themselves using Jsonnet as well, or is it just something you use internally to maintain stuff?
E
No, we are all in on operators. So basically what we are using Jsonnet for is to create YAML manifests and bake them into a Golang binary.
E
That is what we deploy to customers — the immutable stack for monitoring. Basically, what we use Jsonnet for is to get all those mixins and the kube-prometheus and kube-thanos projects into one repository, mangle them up, create YAML files we are satisfied with, bake them into Golang and ship it as a container.
E
The way we bake it into Go will probably change soon, because baking in that amount of YAML using bindata causes some problems with merge conflicts.
E
So we are in the process of changing that. But yeah, we are using Jsonnet basically just for dependency management, and this project is now using the Prometheus mixin, the node-exporter mixin, kube-prometheus, the prometheus-operator CRDs — which are also in Jsonnet — kubernetes-mixin, and, I have no idea, probably even more. And it's all in one place.
B
Interesting, interesting. So this is very much a project where the whole mixin idea for you is a scratch-your-own-itch thing: it's a thing that makes life easier for you.
D
In addition to that — we are kind of a sister team to Pawel's team — we built an operator that also just deploys the Jsonnet manifests. We are experimenting with that idea, using a tool from Frederic, so that's also something we are kind of using under the hood, and we are planning to provide this functionality to our customers as well.
B
I mean, what do you make of what you hear of us at Grafana talking about? Do you feel, from what you've been hearing and so on, that we're heading down the same track as you? Do you feel it would be useful to have more venues — I don't know exactly what I mean by a venue, but places to talk about these things?
E
No, we literally just today kind of scrapped the idea, because — well, for a version two you need something to change, and for most points that we wanted to change, we figured that those don't need to change, because they are already covered and it was only our use case that was wrong. So I don't think there is much to change to create a v2.
E
So
v1
is
probably
fine.
What
we,
what
we
found
at
least
what
I
found
is
that
we
have
quite
a
lot
of
things
that
are
not,
but
that
are
very
fluid
in
monitoring
mixing
space.
For
example,
we
have
quite
a
lot
since
we
are
ingesting
quite
a
lot
of
false
mixings.
E
We found some issues, for example between the Prometheus mixin and the kubernetes-mixin, which define different annotations for alerts. This is not something that is very important, but it's scratching some customers: why do you have these inconsistent annotations?
E
Here you have a summary and a description, and there you have a message — how do I create my alerting rule and my Alertmanager configuration if you have both? Those are the things that we probably want to tackle: to define those parts of the spec to be more consistent.
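Normalizing the divergent annotation keys the way described — for example folding a mixin's `message` annotation into `description`, so downstream Alertmanager templates see one consistent field — could be sketched like this. The rule file is a made-up example; real mixins would be patched in Jsonnet rather than with sed:

```shell
# Fold `message:` alert annotations into `description:` for consistency
# across ingested mixins (toy example on a hand-written rule file).
cat > alerts.yaml <<'EOF'
- alert: KubePodCrashLooping
  annotations:
    message: Pod is crash looping.
- alert: PrometheusDown
  annotations:
    summary: Prometheus is down.
    description: Prometheus has disappeared.
EOF
# Rename the key while preserving indentation.
sed -i 's/^\( *\)message:/\1description:/' alerts.yaml
cat alerts.yaml
```

After a pass like this, one Alertmanager template referencing `.Annotations.description` works for alerts from every mixin.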
E
And to provide something with better readability, maybe. Right now we have a few fields that are very fluid, and you can basically define what you want. I know that, for example, the Ceph mixin defines quite a lot of annotations, and they mix and match basically everything, and it doesn't provide much value to customers.
B
Yeah. Of the mixins that you consume, what percentage are private and what percentage are open source — if you're willing to say? You don't have to, of course.
C
Now that I see OpenShift: I have seen two issues come in on the Tanka repository, with people reporting things.
C
We are not really used to that, and we don't know what to do with it. Also, we don't have OpenShift, so we'd need access somewhere to test this.
E
Yeah, so it might be strange, because our monitoring stack in OpenShift — until version 4.6, which I don't think is even released yet — is completely immutable. So if you want to monitor your own application, you need to go through some tricks to do that, and we prevent using the in-cluster monitoring stack for this. That way, you might get various reports about strange OpenShift issues with Prometheus or Grafana.
E
Oh yeah, this one is actually fairly simple: it's basically an RBAC issue. The kubeconfig that is ingested into Tanka is insufficient to do this. In the case of OpenShift, you cannot list all namespaces, because a namespace in OpenShift, from the point of view of a usual user, is identical to a project, and that is the tenant isolation.
E
It's a soft tenancy model for OpenShift. So if you don't have enough permissions — you'd need admin permissions for that — you basically cannot list all namespaces, and that's the way it fails.
E
Yeah, so treating that as a soft failure might be a workaround for this. And since you cannot list all namespaces, maybe just limit the options for the diff, and possibly provide an option to specify the namespace to the `tk` binary — to the diff — so it can use that namespace and not have to ask the cluster. Because you actually need either this or...
E
I have no idea if, in our kubeconfig, we specify somewhere what the RBAC policy is — what is allowed for namespaces. This is something I would need to check. But basically, in this case you need to somehow figure out what the namespace is before doing the diff; otherwise the RBAC policies in OpenShift won't allow it.
E
So it's either specifying the namespace and passing that to the `tk` binary, or going through some chain of configuration options and parsing the kubeconfig to figure out what the actual namespace is that you are using right now, as this user.
C
A quick look, and it probably uses the same function — yes, it's when it wants to prune. So it's the same problem, I'm pretty sure.
E
I can already see that it is the same. This is also hitting the tenancy model in OpenShift.
C
I think the call indeed happens because it wants to check if the namespace exists: it just pulls all namespaces and then, if one doesn't exist, it makes sure it can be created, and there's some ordering of the diff and things like that. So that's why we required it in Tanka; it would be interesting to work around it.
E
There is no check to just pass the namespace and get the response "yes, this exists" or "this doesn't" — because that would obviously also be against the tenancy model, since you could get the information about whether it exists.
C
Until now it was some guessing for us — like, yeah, it's probably stuck somewhere there. So now we have internal information.

C
Feel free to submit the PR.
B
I suspect — unless you want to raise any other things? I suspect not. Well, thank you very much. It was great to meet you both, and to hear more about the stuff you're doing. I've known it sort of from a distance, so it's good to hear about it in a bit more detail.