From YouTube: 2023-07-13 Scalability Team Demo
B: That was a good introduction, Andrew. I'm basically learning Tamland and I'm really enjoying it. It's a really good tool. I started to look into the parameters that we want to add: we're starting to want to override some parameters at different levels. I looked into this, maybe got a bit excited about it, and I wanted to run a couple of ideas by you and see what you all think. I likely have a lot of gaps.
B: So please point them out as well. And, like Andrew said, we also want to use Tamland for Dedicated later.
B: And there's also talk about possibly using it for Runway, so there are lots of opportunities to think about Tamland as a kind of library or tool, and to think about the inputs and outputs of that tool. When we talk about those parameters, one example is that we want to be able to change the forecast horizon for a particular forecast, right? That might happen at the saturation point.
B: So we might want to say that, in general, capacity forecasts should have a longer horizon, for example. And then we also want to be able to override those settings at a service level, or at a service-and-component level. So we want to say that for a particular Patroni service, that forecast should be even longer, or different.
B: So there are different levels of overrides that we would like to make, and I was wondering where we actually want to store that information. Right now a bit of it is in Tamland itself, so there are some YAML files with overrides there, and other stuff lives in the runbooks repository. What Bob and I talked about today was to actually try to move that stuff into the runbooks completely, so that basically there's a single JSON file.
B: So this is the example that we have today. This is tamland.jsonnet in the runbooks, and what we're talking about is adding more information to that. We can possibly refactor it a little bit, but the idea is having the opportunity to add those overrides. So, for example, we can override forecast days for a particular component and service.
B: We can add events, like we talked about annotating the history and saying this is where we did a Postgres upgrade, for example, and all those things, and generate that JSON file as input for Tamland. I was wondering if that's the right place to put that, for starters.
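The layered overrides described here (a global default, a service-level override, and a service-and-component override, all in one JSON file) can be sketched as a small resolution function. The key names and values below are illustrative, not Tamland's actual schema:

```python
# Minimal sketch of layered override resolution from a single JSON config:
# component-level override > service-level override > global default.
# All key names here are illustrative, not Tamland's real schema.
import json

CONFIG = json.loads("""
{
  "defaults": {"forecast_days": 90},
  "services": {
    "patroni": {
      "forecast_days": 180,
      "components": {
        "disk_space": {"forecast_days": 365}
      }
    }
  }
}
""")

def forecast_days(config: dict, service: str, component: str) -> int:
    """Resolve forecast_days for a service/component pair."""
    days = config["defaults"]["forecast_days"]
    svc = config.get("services", {}).get(service, {})
    days = svc.get("forecast_days", days)
    comp = svc.get("components", {}).get(component, {})
    return comp.get("forecast_days", days)

print(forecast_days(CONFIG, "patroni", "disk_space"))  # → 365 (component override)
print(forecast_days(CONFIG, "patroni", "cpu"))         # → 180 (service override)
print(forecast_days(CONFIG, "redis", "memory"))        # → 90 (global default)
```

The resolution order makes the "even longer or different for a particular Patroni service" case a plain dictionary lookup chain.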
D: Right, there's this resource and it's 90 days, and maybe there are defaults for some of those values, but you can also override them on a per-resource basis. And then, in our case, we wouldn't maintain that by hand. Somebody at another company who wants to use it, or maybe someone on the Runway team, could just have a very simple YAML or JSON file that's really static, with like five things in it, and it never changes. But for gitlab.com we're kind of beyond that scale.
D: So we would generate that file from our service catalog and the metrics catalog and all of those things together, in exactly the same way we generate rules for Prometheus. We don't maintain those manually anymore, because we've grown out of that scale, but other people in smaller companies could just maintain it manually. It's not a requirement. And then literally everything is in there; nothing is inferred from Prometheus.
D: You could even go as far as saying: the query that you're going to use to get your history is the following query. In our case we would always use the utilization ones, but there's literally nothing in Tamland that implies this is the structure of your data; all of that is in the YAML file.
A: I think what Andreas is going to show is what we're working toward, but that last piece that Andrew is mentioning: right now Tamland assumes that you will have a metric available, gitlab_component_saturation and so on, and then we paste in whatever name Tamland expects. I think in the first iteration that will remain the same, yeah.
A: I like where Andrew is taking this, where that's actually part of this tamland.json configuration file. Then we just get a bunch of queries. So each service-and-component thing that you mentioned will have its query ready to execute, and that's the result that we're going to be storing and working with.
E: We think about it in terms of abstractions: we have some sort of interface for data that we import, however we want to structure that. Even if, in practice, it's always going to be Prometheus, I think that's still a useful mindset. Totally.
D: Nice. Something that reminds me a little bit of that is OpenSLO, which is kind of a YAML file, and I do think YAML might be a little bit more of an industry standard, no matter what you think of YAML. OpenSLO is a YAML file, and inside it's got a query, and it doesn't matter whether that query is for Datadog or for Prometheus or whatever.
D: It's just a query, and the way you interpret it is dependent on the data source.
D: And just on the case of Dedicated: I was actually thinking, if we're going to start using this for Dedicated, we're going to need like nine months' worth of data, but we don't have any way to store that data at the moment. I was thinking we could just start storing it in Parquet files and actually use Tamland directly from that, because we don't really know where else to store all of that data for the Dedicated customers yet. So it kind of ties in with that conversation a little bit as well.
A: But you know my opinion: I mentioned to Andreas that having a single file that is the input to Tamland, I like this idea. And then, as you can see in the example here, we can add these overrides and use those to place annotations on a graph and so on. Those will be different for whoever or whatever is using Tamland, whether it's Dedicated or another customer.
A: One thing to maybe call out specifically: we already have things that Dedicated reuses in the runbooks, and it would be nice to be able to keep that separation. I showed you the gitlab-metrics-config thing that is overridden in both. Because we use jsonnet, it would be nice if we can use gitlab-metrics-config to get the specifics, because then potentially Dedicated could use the same kind of file generated from their own metrics config. Does that make sense? Yeah.
B: All right. Going further, what I was playing around with was introducing a kind of data model in Tamland for that input. Today we're basically, more or less, passing around dictionaries, which is fine, it works. But the more complexity we have in those overrides and in where things are coming from, the more I think we will benefit from adding a data model. I played around with that a bit today.
B: It's really just making things a bit more explicit: adding data classes and representations for that information. So we can parse it from the JSON, we can be very explicit about where those overrides are coming from, and produce a result. It gets more testable in the end, I think, and we get the benefit of passing those components around in the forecasting code.
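A minimal sketch of this "data model instead of dictionaries" idea: parse the JSON input into explicit data classes, so the origin of each override is visible in the types and the parsing is testable. All class and field names here are hypothetical, chosen only to illustrate the shape:

```python
# Sketch: parse the override JSON into dataclasses instead of passing raw dicts.
# Names (TamlandInput, Overrides, ...) are illustrative, not Tamland's real API.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Overrides:
    forecast_days: Optional[int] = None  # None means "no override at this level"

@dataclass
class Component:
    name: str
    overrides: Overrides = field(default_factory=Overrides)

@dataclass
class Service:
    name: str
    overrides: Overrides = field(default_factory=Overrides)
    components: dict = field(default_factory=dict)

@dataclass
class TamlandInput:
    default_forecast_days: int
    services: dict

    @classmethod
    def parse(cls, raw: dict) -> "TamlandInput":
        services = {}
        for sname, sraw in raw.get("services", {}).items():
            components = {
                cname: Component(cname, Overrides(craw.get("forecast_days")))
                for cname, craw in sraw.get("components", {}).items()
            }
            services[sname] = Service(
                sname, Overrides(sraw.get("forecast_days")), components
            )
        return cls(raw.get("defaults", {}).get("forecast_days", 90), services)

raw = {"defaults": {"forecast_days": 90},
       "services": {"patroni": {"forecast_days": 180,
                                "components": {"disk_space": {"forecast_days": 365}}}}}
model = TamlandInput.parse(raw)
print(model.services["patroni"].components["disk_space"].overrides.forecast_days)
```

The forecasting code can then accept `Service` and `Component` objects rather than nested dictionaries, which is the testability benefit mentioned above.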
D: Yeah, I agree. Another thing to look at, and it might not work in your case, but it works very well for Dedicated: we have this thing called a tenant model, which is a big blob of JSON that describes everything about a tenant. We use JSON Schema, and that's basically our first level of validation, so basically everything in there is described.
D: You know, this is an integer, it's between this and that. What we found is that JSON Schema is very expressive, and it's really nice, because when you start typing up the JSON you get all the autocomplete and everything like that kind of for free. I mean, with Python it's not as bad, but what we found was that we were using jsonnet to validate the JSON, and jsonnet is actually kind of uniquely awful at doing data validation for some reason. So having that layer of JSON Schema in front really helped us. But use it or don't use it.
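To illustrate this "schema as a first level of validation" idea: in practice you would use a real JSON Schema validator (for example the `jsonschema` Python package), but a tiny hand-rolled checker for a few keywords shows the shape of the approach. The schema content here is made up:

```python
# Illustration only: a hand-rolled checker covering "required", "type", and
# "minimum"/"maximum", mimicking a small subset of JSON Schema keywords.
# A real setup would use a proper JSON Schema validator library.
SCHEMA = {
    "required": ["forecast_days"],
    "properties": {
        "forecast_days": {"type": int, "minimum": 1, "maximum": 3650},
    },
}

def validate(doc: dict, schema: dict) -> list:
    """Return a list of human-readable validation errors (empty = valid)."""
    errors = []
    for key in schema.get("required", []):
        if key not in doc:
            errors.append(f"missing required field: {key}")
    for key, rules in schema.get("properties", {}).items():
        if key not in doc:
            continue
        value = doc[key]
        if not isinstance(value, rules["type"]):
            errors.append(f"{key}: expected {rules['type'].__name__}")
            continue
        if "minimum" in rules and value < rules["minimum"]:
            errors.append(f"{key}: below minimum {rules['minimum']}")
        if "maximum" in rules and value > rules["maximum"]:
            errors.append(f"{key}: above maximum {rules['maximum']}")
    return errors

print(validate({"forecast_days": 90}, SCHEMA))  # → []
print(validate({"forecast_days": 0}, SCHEMA))   # → ['forecast_days: below minimum 1']
print(validate({}, SCHEMA))                     # → ['missing required field: forecast_days']
```

Keeping the schema as data is also what enables the editor autocomplete mentioned above, since editors can read the same schema file.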
B: Cool. And then I thought a bit about how Tamland processes all of that. Today what we have is a sort of Jupyter notebook: we generate those notebooks based on a few templates that we have, and then we execute the Jupyter notebook and get back those forecasts. In a sense, I think we're kind of mixing the forecast generation, so using Prophet and all those inputs, getting the data and producing the forecast.
B: We're mixing that a bit with how we want the output to look, right? What will later be deployed to Pages is basically the output, and that's coming from the Jupyter notebook. I was playing around with that a bit today, and I thought perhaps it would make sense to split it up into different parts, so we can decouple that.
B: First of all, just plain Python: generate all the forecasts, and you end up with, let's say, a data directory or dictionary with all the forecasts, the familiar-looking graphs, and maybe some YAML or JSON files or whatever that contain all the results of the forecasts. And then, based on that, as a second step, you can generate output for GitLab Pages, however that works.
B: You can manage the capacity planning issues and all that. So it's a kind of decoupling, making that easier to work with, unless we have a real desire to use Jupyter notebooks, which I wasn't sure about.
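The proposed split, a first step that only produces forecast data and a second step that only renders output, could look something like this sketch. The file name, the data shape, and the stubbed-out forecast are all placeholders (the real step 1 would run Prophet):

```python
# Sketch of decoupling forecast generation from output rendering via a plain
# serialized artifact. The forecast itself is stubbed where Prophet would run.
import json
from pathlib import Path

def generate_forecasts(config: dict) -> dict:
    """Step 1: run forecasts and return plain data; no output concerns here."""
    results = {}
    for service, components in config.items():
        for component in components:
            # Placeholder for the real model fit (e.g. Prophet) per component.
            results[f"{service}/{component}"] = {"saturation_date": "2024-01-01"}
    return results

def render_report(results: dict) -> str:
    """Step 2: turn stored results into output, independently of step 1."""
    lines = [f"{name}: predicted saturation {r['saturation_date']}"
             for name, r in sorted(results.items())]
    return "\n".join(lines)

# The JSON file is the only contract between the two steps, so the renderer
# (Pages output, capacity-planning issues, ...) can be swapped or rerun freely.
artifact = Path("forecasts.json")
artifact.write_text(json.dumps(generate_forecasts({"patroni": ["disk_space"]})))
print(render_report(json.loads(artifact.read_text())))
```

Because step 2 reads only the serialized results, the issue-management and Pages outputs can evolve without touching the forecasting code, which is the decoupling argued for above.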
D: I can actually tell you the historical reason why it was based on Jupyter notebooks. GitLab had this very brief time where we had this integration with Jupyter notebooks in the application, which wasn't really an integration, and I got this really weird OKR to show that it worked. So I kind of twisted that OKR around to start building Tamland.
D: So it was kind of the cart leading the horse, or whatever. It was partly to show that the integration worked, but it also kind of helped us bootstrap it, because it was a lot less work. But there's no reason to keep that.
B: Cool. I got really excited about this today, so I ended up playing around with Markdown templating and what we could generate from that information. So remember that we have generated all the forecast information separately, and then we could perhaps consider plotting or generating pages in a different way. What I ended up doing was using MkDocs for that. It's just a simple Markdown framework.
B: Yeah, there's a bit of structure there, that's true.
B: We sometimes had a need to look at the forecast in a debug way, with change points and the trend line; you can add that there. And we could even have the events that we captured, so knowing that something happened on a particular date is something we could describe. We could even, I don't know, scrape the issue tracker and find all the capacity planning issues related to that component that we know we had in the past.
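Generating those pages from the stored forecast results could be as simple as writing Markdown strings for a framework like MkDocs to build into a site. This sketch invents the input shape (forecast horizons, events, issue references) purely for illustration:

```python
# Sketch: render one Markdown page per service from stored forecast results,
# events, and related capacity-planning issues. Input shapes are illustrative.
def service_page(service: str, forecasts: dict, events: list, issues: list) -> str:
    """Build a Markdown page (as a string) for a single service."""
    lines = [f"# {service}", "", "## Forecasts", ""]
    for component, horizon in sorted(forecasts.items()):
        lines.append(f"- **{component}**: forecast horizon {horizon} days")
    if events:
        # Annotated history: e.g. "this is where we did a Postgres upgrade".
        lines += ["", "## Events", ""] + [f"- {date}: {desc}" for date, desc in events]
    if issues:
        lines += ["", "## Past capacity issues", ""] + [f"- {ref}" for ref in issues]
    return "\n".join(lines) + "\n"

page = service_page(
    "patroni",
    {"disk_space": 365},
    [("2023-06-01", "Postgres upgrade")],
    ["capacity-planning#123 (example reference)"],
)
print(page)
```

MkDocs (or any static-site generator) would then pick up one such file per service, keeping the rendering entirely separate from the forecasting step.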
A: I like that we still have access to all of this information and that you can show it on a single page. But, for example, the issues feel like something, yeah.
A: So we have the tool Tamland, which is from a different repository, the Tamland library, let's call it. And then what is currently the Tamland repository, which files issues in capacity planning and hosts this Pages site, uses the Tamland library, and it has a tool on top of the Tamland library that generates a Pages site, maybe. I'm just brainstorming, don't take anything as decided.
C: I don't know that I can verbalize it as well as what Andrew wrote down there, so I think it covers everything in the chain of thought. I joined a few minutes late, so I initially heard you all talking about generating things in SLO, and I remember Nick mentioning Sloth, so I was like, I'll just note it. But I don't know that there's much else to say other than what's typed there.
C: I was just out of context. So unless Bob or Andrew want to add anything, I think what you typed is good.
D: It's a fair amount of work to bring back the same level of stuff that we have. I mean, it's not impossible, but it would be big.
D: It's all the extra stuff: the tooling, the tooling links in the dashboards, and obviously the alerting and the error budgeting side, the aggregation of the error budgets, the aggregation sets, all of that. It's a very long tail of stuff.
C: That makes sense, cool. Trying to join at my time of day... I think, if we're okay, I'll move to other things. Just brief observability updates: we're looking to do the Elastic upgrade for the prod logs cluster this weekend; I'm working through the change issue. Thank you, Stephanie; I don't think you even had a chance to look at it, but Steve already added a lot of comments on what we're trying to do. It turns out the in-place upgrade may be the easier thing to do.
C: We're ready to try to click the button this weekend, just trying to check all the boxes on it and make sure we're set. It helps unblock us if we can get to Elastic 8.7, and we can do some other things with the hardware on that cluster to deal with some of the saturation we've been seeing. And then Nick: one part of what he's trying to do to wrap up the quarter is to add a proper OAuth2 proxy for credentials for remote write, because right now we're just doing basic auth, which is what we did for code suggestions, but we want to make remote write ready for everybody.
C: We want to add the OAuth2 proxy stuff, which is what's in that epic, so he'll be working on that. Other than that, just noting that he's also totally ready to help you, Stephanie, as we start to look at moving some of the rule stuff to Thanos, because that's a big step for us in making Prometheus more efficient.
C: The Prometheus nodes in prod are as big as the database nodes for Patroni, and we need to do something to make that better. So much of that is because of the high-cardinality metrics and the rule processing. So whatever we do there will help us both save money and make it easier to deal with rule processing in the future.
A: Do we need to... I don't know if this has been discussed with Nick, but how do we see the scaling for Thanos ruler? Is that going to be functional sharding, or...?
C: Nick has more of the insight there, and I'm working on trying to extract that so I can get it back down to you guys. But yes, it can be sharding per tenant, it can be sharding... If we enable the rulers to also use remote write, then we can really horizontally shard, because now I'm writing my metrics out and I don't have to have the recording rules on the same disk where the ruler is running.
F: This is a note for those who aren't Bob and whom I didn't talk to yesterday: I am trying to go around and get all the information out of people's heads before they go on vacation, which I have done some of with Bob. I'm also meeting with Nick on Monday, so we're going to come up with a plan. It's going to be a great plan, from the scalability side.
F: The plan is just to get Nick and me and Matt to come up with what makes sense going forward, and execute it.
C: This is both an efficiency and a reliability improvement that needs to be made to the observability stack. Having to tell you all monthly, we're sorry, there's a gap in the metrics for your error budget reports, that's not something that's going to be sustainable going forward. So yeah.
F: And I know it's mentioned as an OKR for scalability as well, so it's going to be a joint effort, or at least one of the things. I haven't looked all the way through all of my to-dos, but there's a notification about OKR discussions for scalability with this one being on it. So yeah, I guess...
F: If anyone has any questions or wants to follow up, there's a big epic that I'm working on refining that I can link here. I predict the high level is going to be that we end up moving all of the rules to Thanos ruler and doing something else with this. But I need to make sure.
F: Correct, that's not really the observability part; that's just that we need our metrics to be accurate, and that is the other part of it.
D: Now that he's joining, going back to the OAuth question, just on that: I think the thing that would be really nice from the outset is if we can use OIDC-style trust relationships with short-term credentials to authenticate. On Dedicated we're going to be using this remote-write interface quite a lot, and I don't want us to manage any type of long-term keys.
D
You
know
like
all
the
stuff
on
dedicated
now
we're
getting
rid
of
all
the
the
long-lived
keys
for
everything
and
there's
no
leaks.
It's
everything
is
better,
and
this
would
be
one
of
those
things
would
be
really
nice.
If,
from
the
outset,
we
can
use
like
short-term
keys-
and
just
you
know,
get
a
key
for
for
short
term
and
use
that
key
for
the
remote
right
and
not
have
to
worry
about
keeping
Service
keys,
or
you
know
Json
blobs
that
are
really
top
secret
anywhere.
C: Right, and that was the whole reason we wanted to make sure we finalized this. Right now it's just basic auth, because it was: we need to get code suggestions done in May, and we were like, okay, basic auth it is. But we were circling back, so I just linked it: the oauth2-proxy thing I just linked is what we're looking at using. I'll make sure I message that, and I can try to get that back to Nick, so whatever we do there...
D
I
my
experience
with
o
wolf
is
kind
of
its
its
user
to
to-
and
this
is
just
maybe
my
experience
but
use
it
to
to
service
right
where,
where
a
lot
of
oidc
is
like
service
to
service
with
short-term
credentials
and
trust
relationships,
but
I'm
presuming,
you
know
next
I'm
sure
that
that
what
he's
got
here
can
obviously
Support
Services
service
because
that's
like
19
well,
all
of
the
traffic
that's
going
to
go
through
remote
rights
is.
C: Yeah, this will be generating it for a service, and that was the last piece of what we were a little hung up on and waiting to sync with Foundations on. Before we thought too deeply about it, the problem was: we wanted to use Vault and be like, hey everybody, use Vault, but Vault's not exposed externally for those things in AWS.
C: Yeah, so that's why we took some time trying to evaluate what's in that epic, and I'll circle back, because I know Nick was meaning to add a note to the epic that I linked there with what we were choosing and some of the design on that. Let me make sure that gets added. But that was one of those things in trying to wrap up this quarter; just letting you know what we were having people focus on. So, Andrew.
D: It won't be all of it. We're not going to write all of the metrics from the instance; we're going to send maybe utilization, saturation, and kind of headline metrics somewhere. Not everything, basically a subset. It's not going to be the whole Prometheus.
D: It's literally a set of recording rules, probably. I mean, even then I kind of have concerns around the cardinality, because when we get to 100 customers and we have a hundred IDs in those, is that going to be something we can do? So, you know, there is that.
C: Well again, that's part of what the flexibility of remote write gives us: if we decide to implement different Thanos backends underneath that, this is where we can do a bit of magic behind the scenes, where actually there are three different Thanoses that remote write is accepting and writing to.
D: Have we got any time left in this demo? Because if we do, since we're just talking about OAuth, I just wanted to maybe show this little thing that we built for Dedicated.
D: Let's do it. Igor and I were talking yesterday about Runway and authenticating users against a GitLab instance, and I suddenly realized that there's a bunch of stuff on Dedicated where we want to talk to a GitLab instance, and you have to have a GitLab token set up. That means creating a long-lived token, which is kind of the enemy of everything.
D: It's done with a single application for it. So we use this thing called pmv, which I'm just going to use, because I have to look this one up here. It's kind of a tool for getting all the secrets onto your computer without having them written in files, which is very important for Dedicated, because we don't want people accidentally leaking things.
D: What's kind of useful is that if I need a GitLab token now, I don't have to go and create one under my account and then export it, after which it's in my bash history and who knows where else. I can just do this: I'm just going to change this to that, and what this is going to do is... whoops, it's too quick. Let me try that again.
D: I need to bring this whole screen over here. Let me do this. So I just run that, and it logs into GitLab. If I'm not logged into GitLab, I'll have to do an Okta OAuth. And then, if I do a curl at this point with the GitLab token, that will now work: in my environment I now have a GitLab token that was exported by pmv via that OAuth process.
D: So I can just do that, and it's me. That token will last for two hours, and then we can throw it away. So if you find that you often need GitLab tokens, go and look at pmv; it's got this thing called gitlab in it. It's also got useful tools for fetching things from 1Password and injecting them into your environment. That might be useful if you're looking at using tokens with GitLab and you don't want to stick them on your machine.
D: There might have been an issue about that; I've got several open issues on glab at the moment. Cool, thanks.
D: It's just how the application is set up: the callback is localhost, and it just creates a server while it's running. When GitLab finishes its authentication, it jumps back to localhost, gets the code, does the exchange, and then exports the token to the environment, well, over here.
D: So, while it works at the moment, I'll just show you the part that doesn't fit into glab: it's that, at the moment, it's a...
D: If you go look in here, sorry, there's this client, this application that we define. And there's the secret, which we don't actually use, because we're using this newer kind of OAuth that doesn't require the secret, but we've got the application ID, and this is defined once off. If we did that, I think what we'd want to do is allow glab to pass in the application ID, rather than have it hard-coded like it is over here, because then the user of glab would need to create an application for OAuth and pass that identifier in; at the moment it's just static.
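The "newer kind of OAuth that doesn't require the secret" is the authorization code flow with PKCE (RFC 7636), where the client sends a hashed one-time verifier instead of a client secret. A sketch of building such an authorization request, with a placeholder host and application ID (the real redirect port and scopes would come from the tool's configuration):

```python
# Sketch: PKCE pair generation and an authorization URL for a secretless
# OAuth flow. Host, application ID, port, and scope are placeholders.
import base64
import hashlib
import secrets
from urllib.parse import urlencode

def pkce_pair():
    """Generate a code_verifier and its S256 code_challenge (RFC 7636)."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode()).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

def authorize_url(base: str, application_id: str, challenge: str) -> str:
    """Build the authorization URL; the verifier is sent later, in the code exchange."""
    params = {
        "client_id": application_id,  # passed in, rather than hard-coded
        "redirect_uri": "http://localhost:7171/callback",  # loopback server
        "response_type": "code",
        "scope": "api",
        "code_challenge": challenge,
        "code_challenge_method": "S256",
    }
    return f"{base}/oauth/authorize?{urlencode(params)}"

verifier, challenge = pkce_pair()
url = authorize_url("https://gitlab.example.com", "abc123", challenge)
print(url)
```

After the browser redirects back to the loopback server with the code, the client exchanges code plus `verifier` for a token; no long-lived secret ever touches disk, which is the point made about localhost callbacks above.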
A: Does glab have a thing... not glab, but pmv could have a thing where it can parse something like an environment file where we say: I need a GitLab token, I need this token, this token, this token? Really often we have a .env.sh that has some 1Password stuff inside, and some... yeah.
D: pmv does it a slightly different way, where it creates a 1Password entry whose fields have the environment name of the thing, and when you export that thing it just generates it; it's got a way of capturing it in that way. So you can capture keys that way as well, but it's slightly different, because 1Password's built-in tooling has got what you're describing as well.
D: Yeah, I mean, I could put that in there as well. The other thing, the reason why we can't just use the 1Password CLI, is that we do stuff like going to AWS SSO for logging into AWS. Obviously 1Password is still great, but it's a one-trick pony.