From YouTube: Verify:CI - Technical Discussion - 2020-05-08
A
Yeah, thanks. So today I wanted to tell you a little bit more about Prometheus: what Prometheus is and how to use it. It's going to be a little more technical, and I also plan to do a live demo, and that's interesting. I really do like live demos, because there are so many things that can go wrong and usually something explodes, and you know, it's fun, so yeah.
A
And that's GitLab, and we can basically open the Rails console in a moment. So, as you can see, GitLab is up and running. Now, how Prometheus actually works: Prometheus is a separate application running on a different server, in a different container or a different virtual machine. It's going to iterate over all the endpoints in its configuration and scrape some metrics. So what does it mean to scrape a metric? It means that an application like GitLab, or a subcomponent, exposes some metrics, and then Prometheus visits them periodically.
A
On a configured interval, it's going to visit this endpoint, see what metrics are there, and just copy them into its database, into its memory, and keep these metrics and measurements there so that we can later do some operations on them. So in the case of GitLab, the metrics endpoint is exactly here, and, as you can see, we have a bunch of text-based metrics recorded here. So we do have a lot of them already, but during this demo I hope we are going to add to that.
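A scraped metrics endpoint serves a plain-text page in the Prometheus exposition format, roughly like this (the metric name and value here are illustrative, not taken from the demo):

```text
# HELP gitlab_ci_demo_counter Number of times the demo code path ran
# TYPE gitlab_ci_demo_counter counter
gitlab_ci_demo_counter 4
```

Each metric carries a HELP and TYPE annotation followed by one line per series with its current value; Prometheus copies these values into its database on every scrape.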
A
So in there, in GDK, I'm not completely sure whether this is something that works by default, but I'm sure that in GDK you can also run Prometheus and open a UI for it. So in the case of GDK it's exposed on port 9090, and this is the Prometheus UI that allows you to enter some expressions, and the expressions are in the PromQL format. So this is like SQL, but for Prometheus, and we can...
A
We can actually generate some graphs and get some raw data here, based on the metrics Prometheus scrapes from GitLab itself. So actually in production we also have something like that, but for obvious reasons it's not going to be available to everyone, because it contains data; it's behind a load balancer or some kind of security configuration, but it's also in this place. So, okay. So let me tell you a bit more about what we can measure.
A
As you can see, there are a lot of them here, but we want to create a new metric, and it's going to be called gitlab_ci_demo_counter. As you can see, it's not there; something like that does not exist yet. So let's create this counter.
So in Ruby code it looks like: Gitlab::Metrics.counter, with the counter name, gitlab_ci_demo_counter.
A
So the metric, the counter, is defined, and a counter is very simple: we can just increment it, and there is nothing more we can do with a counter. So that's how counters work. So let's increment the counter: counter.increment. As you can see, the counter value currently is 1. If we increment it again, the value goes to 2. Alright, useful, okay.
A
Let's see if this is actually okay. So it appears that the metric has appeared at the metrics endpoint, and it's very simple: it's defined by some annotations and the value. We do have some information about what type of metric it is, that it's basically a counter, and the value that has been recorded in the console. So in this case we modified the counter in the console, but in a real application we would have some Ruby code actually doing the incrementation.
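The counter semantics described so far (define a named metric, increment it, expose it at the endpoint) can be sketched in a few lines of plain Ruby; this is a self-contained illustration, not GitLab's actual `Gitlab::Metrics` implementation:

```ruby
# Minimal sketch of Prometheus counter semantics: a counter can only go up.
class DemoCounter
  attr_reader :name, :value

  def initialize(name)
    @name  = name
    @value = 0
  end

  # The only operation a counter supports: increment (by 1 by default).
  def increment(by = 1)
    raise ArgumentError, 'counters can only go up' if by < 0
    @value += by
  end

  # Roughly what the /metrics endpoint would expose for this counter.
  def to_exposition
    "# TYPE #{name} counter\n#{name} #{value}"
  end
end

counter = DemoCounter.new('gitlab_ci_demo_counter')
counter.increment
counter.increment
counter.value # => 2
```

Note that decrementing raises an error; that restriction is what makes `rate()` meaningful later on.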
A
Whenever something happens. For example, we can have a counter for the number of pipelines created on GitLab.com, or a counter for some actions that the user did, and put it in the code path so that we can actually bump the counter. And now let's actually see how it looks in the UI.
A
Can you see the counter? So it appears that the value has gone from almost zero to four, and four at the top is okay. So we actually incremented it in between scrapes, and that's very interesting, because we recorded the value, then Prometheus connected to the metrics endpoint and scraped the number, and then we incremented it again twice, and Prometheus noticed that, because it scrapes the metrics endpoint, and it also records the time at which the observation has been made, right?
A
Perhaps, okay, so that was the problem: we had the time set explicitly on which we are going to show time. So, interestingly, as you can see, we can also see that it has been incremented to five. It means that we record the observation when we actually increment a metric, not when it's being scraped. And so in this case it's only going to go up, but that's not really fun. So let's do something more with it.
A
I created the second counter, right? So what we have done before doesn't matter, because it's a different counter. Okay, so that's simple. So we should already start seeing some...
A
So the value currently is 20; we should wait. Okay, as you can see, it's okay, so it's been done now. We sometimes need to wait for the scraping, because Prometheus is connecting to the metrics endpoint on a predefined interval. So it's 120. So as you can see, this is only going to rise, and that's not something we are interested in. We want to see the rate at which it's being incremented.
A
So rate is a very interesting function, and the range given here is one minute, so we are taking the whole minute of observations and we are going to calculate the rate at which the counter has been incremented during that time. This is interesting because we were incrementing it for exactly that minute, and who is going to tell me what's the rate of incrementing of this counter in the timespan of this minute? This is also not a super complex question, but more complex than this one.
A
What's the value of the incrementation rate, in increments per second? Two per second, yeah? So I think that's exactly it. Then let's see if it actually works, and as you can see, Prometheus says that's exactly this rate. So that's 2 per second, and, you know, it depends exactly where the scraping has been done and what we are seeing here, but you can see the maximum rate of incrementing this counter is 2, so yeah, it works.
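What `rate()` computes here can be sketched in plain Ruby: given the counter samples that fall inside the range window, the per-second rate is the total increase divided by the elapsed time. The sample values below are invented to match the two-per-second result in the demo:

```ruby
# Sketch of what rate() computes for a monotonically increasing counter:
# the per-second increase across the samples in the range window.
def rate(samples)
  first_time, first_value = samples.first
  last_time,  last_value  = samples.last
  (last_value - first_value).to_f / (last_time - first_time)
end

# A counter incremented steadily from 0 to 120 over a one-minute window
# (timestamps in seconds, as Prometheus would have scraped them):
samples = [[0, 0], [15, 30], [30, 60], [45, 90], [60, 120]]
rate(samples) # => 2.0, i.e. two increments per second
```

Real PromQL `rate()` additionally handles counter resets and extrapolates to the window boundaries; this sketch covers only the simple monotonic case.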
A
So that's basically what the Prometheus counter is, and there are a lot of different things we can do in PromQL. We can have many different counters and we can actually use them: we can aggregate the counters, we can define alerts based on those values and, for example, alert our CI channel when something seems suspicious. And now I wanted to tell you just a little bit more about histograms, which are way more complex than counters, but the idea of how they work is basically the same. But before I proceed...
E
So if you define the counter, if you reuse the same gitlab_ci_demo_counter name under a different type, let's say something that is not a counter, does Prometheus recognize that it is a different type? Can you still use the same name under a different type, or is the name unique for the metric? That's...
A
A very good question, and I have no idea, so we can check it. We will see what is going to happen. So let's define something else, which is a gauge. A gauge is a little different from the counter, because the counter can only go up, while a gauge can also go down, because a gauge is, you know, just the value being observed at the moment that Prometheus does the scraping.
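The difference being described can be shown with a tiny sketch (class name illustrative): a gauge supports setting, incrementing and decrementing, while a counter only ever goes up:

```ruby
# Sketch of gauge semantics: unlike a counter, a gauge can go both up and
# down; it reports whatever value is current at the moment of the scrape.
class DemoGauge
  attr_reader :value

  def initialize
    @value = 0
  end

  def set(new_value)
    @value = new_value
  end

  def increment(by = 1)
    @value += by
  end

  def decrement(by = 1)
    @value -= by
  end
end

gauge = DemoGauge.new
gauge.set(10)
gauge.decrement(3) # legal for a gauge, impossible for a counter
gauge.value # => 7
```

A gauge fits things like queue depth or memory usage, where the current level matters rather than the cumulative total.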
A
So that's interesting: it appears that the gauge is actually defined as a counter. So perhaps it actually detected that something like this already exists, and, as you can see, it returns the counter we have defined previously instead of the gauge, even though the description is different, right? So creating a new gauge actually returned the counter. So probably the answer is that it's not possible to reuse the name, and Prometheus doesn't surface that to you. Yeah.
E
Counters, gauges and everything. So could it be that in our tests, like what you just did, you create a gauge sort of metric and in the tests it works fine, but when the counter is initialized first, and then the gauge, it would break on production? Do we have some way, some techniques, like in GDK, where you know it's safe to introduce a counter, and we can know immediately whether it is a duplicate or not, whether it conflicts with something else?
A
I basically think that, as long as you are reusing the type, so you are creating a new counter and it's still a counter, it might work. In a case like this it's clear that we do not get an undefined method error, right, because we are creating a... how do you pronounce that in English? I think it's gauge.
A
It's unexpected behavior, and in my opinion it's actually a bug for something like this to happen. This method should probably raise an error, an exception. So this is something I think we could probably improve, to make it, you know, less error-prone. But let's proceed to histograms, because histograms are way more interesting: histograms allow you to measure the duration of something, how it changes in time, and then you can actually get some really nice data from that.
A
So I'm going to just create a histogram to show you how it looks at the metrics endpoint, and then we're going to go to our production instance of GitLab and I'm going to show you the histogram of pipeline creation duration that we introduced a few weeks ago. Okay, so yeah, let's create the metrics, and we are going to use the pipeline creation duration.
A
Okay, so that's interesting, so let me explain a little more. So of course it defines a histogram, and it defines all the buckets, the histogram's count and the histogram's sum. So the count is obviously the number of observations we have done, and we get three, so the count is three. The sum is all the observations summed together, and we observed 2, 6 and 20, which gives us 28. And then we have all these buckets.
A
So our first observation was 2, and a bucket is going to tell us how many observations we have done with a value that is equal to or lower than the bucket itself. So there were zero observations with a value of 1 or lower, right? And the next bucket is 5. We observed 2, 6 and 20, so in bucket 5 we should have one observation, and that's exactly this. Then our next bucket is 10, and since we observed 2, 6 and 20, in bucket 10...
A
We have two observations, and that's true. And then we have bucket 100, and in bucket 100 we observed 2, 6 and 20, so we have three observations in bucket 100. And there is this additional bucket: everything that goes above our largest bucket is going to be recorded in the bucket plus infinity. And based on this data...
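The bucket arithmetic just walked through can be reproduced with a small sketch; the bounds and observations below match the demo exactly (buckets 1, 5, 10, 100 and observations 2, 6 and 20):

```ruby
# Sketch of Prometheus histogram semantics: each bucket counts observations
# less than or equal to its upper bound (buckets are cumulative), alongside
# a running sum and count. Every observation also lands in the +Inf bucket.
class DemoHistogram
  attr_reader :sum, :count, :buckets

  def initialize(bounds)
    @bounds  = bounds
    @buckets = Hash.new(0)
    @sum     = 0
    @count   = 0
  end

  def observe(value)
    @sum   += value
    @count += 1
    @bounds.each { |le| @buckets[le] += 1 if value <= le }
    @buckets[Float::INFINITY] += 1
  end
end

h = DemoHistogram.new([1, 5, 10, 100])
[2, 6, 20].each { |v| h.observe(v) }
h.count        # => 3
h.sum          # => 28
h.buckets[1]   # => 0
h.buckets[5]   # => 1
h.buckets[10]  # => 2
h.buckets[100] # => 3
```

Because the buckets are cumulative, each scrape transmits only one number per bucket, no matter how many observations were made in between.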
Prometheus is going to visit the metrics endpoint, it is going to scrape these things, and it will expose all this data that we have managed to collect in the Prometheus UI. Actually, querying histograms is much more difficult, but let's go. So today we can probably check the data that we added, but we won't see much, because we won't have many observations. Let me look at some longer period of time, just a second.
A
Let me set the time frame to one day. So it appears that, according to this PromQL query, the average time that we can see on this graph is that creating a pipeline probably takes around 400 milliseconds, this is in seconds, and we have some occasional spikes up to one second, probably.
A
And let me check that the query is correct; so that query seems to be completely fine. But, you know, that's an average rate. It means that we are taking data from five minutes, that's our range vector, and we are calculating the average rate for that time frame, and it's actually like we're missing something here. So there is something interesting, which is...
A
Okay, so it looks like it worked, and we have some really interesting insights. It appears that we do see occasional spikes of pipeline creation taking more than 18 seconds; here we are seeing 10 seconds, we are seeing 13 seconds here, and 12. The values that we are seeing here are much different from what we saw in the case of the average rate, and it means that 99% of requests in a given time took less than 18 seconds, for example, in this case, right?
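The two query shapes being compared here look roughly like this in PromQL (the metric name is illustrative, not necessarily the one used in the demo):

```promql
# Average duration: rate of the sum divided by rate of the count.
rate(pipeline_creation_duration_seconds_sum[5m])
  / rate(pipeline_creation_duration_seconds_count[5m])

# 99th percentile, estimated from the cumulative buckets.
histogram_quantile(0.99,
  rate(pipeline_creation_duration_seconds_bucket[5m]))
```

The quantile is an estimate interpolated within the matching bucket, which is why it can differ so much from the average for a skewed distribution.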
F
I have one question, and maybe it's a stupid question, but I'm thinking of how this is used currently. So we basically add any kind of data that we find interesting, and Prometheus will basically take all that data and remember it, and then we query all the data that we want together, and that gives us whatever we're observing, right? So we can combine all of these different data together. Yes.
A
There, and of course you can locally check if it looks like it should. This way you can, you know, work on the graph until it actually looks correct, because Prometheus is tricky, it's very tricky, but it's probably the best tool that exists right now that actually makes it possible to record such a huge amount of data as we, for example, generate in production, because of the scraping mechanism, and because, basically, we have, you know, those smart buckets and counters that are a very small amount of data compared to what you would have otherwise.
C
Hey Cheryl, we're getting robot voice; is everyone else getting that? Yeah. Yes, it might be a Bluetooth battery or something like that. It's getting garbled here, you know.
G
You might hear some screaming in the back. I was just saying that I wanted to give a quick intro for this meeting, just because it's our first one together around talking about tech debt. The other thing I was going to mention, the thing I forgot to do, is maybe make this meeting optional in future. So attendance is optional for folks, and there are just a couple of things to mention.
G
The agenda is meant to be crowd-sourced, and, you know, if we've run out of topics we could definitely go to our CI tech debt board if we wanted to. And I just wanted to call out the changes that we made to the team page, where we talked about how we're applying labels. So that's a good read-through if you haven't had a chance. So yeah, Fabio, do you want to jump to your first point there? Let me...
E
Yep, so I just wanted to mention, very briefly, that there is some work I have been doing, like in spare time, about refactoring CI minutes. So basically this is just to make everybody aware that there is some change coming. We noticed that this logic has been...
E
...you know, raising some high-profile bugs a few times, especially when it's about resetting CI minutes every month, and that has caused us to jump on a call and actually investigate these priority bugs. So after actually doing that a few times, I realized there are a few anti-patterns there, and logic scattered across different parts of the code base.
E
So now we are moving slowly towards a central location where all this logic lives. I just want to point out very briefly that, right now, because we are right in the middle of the refactoring, and it takes a while to finish this refactoring, there might be cases where some of the logic is in the original place and some of the logic is in the new place. So this could be something, especially for the backend engineers, to keep an eye on.
E
So there's this issue I just put in the agenda, because I think it's one of the technical debt items that are most pressing; whether we agree with that or not, I don't know, I just want to bring this up for discussion. But from the conversation we were having in the merge request, there can be different approaches to solve this problem, and they might require doing different sorts of proofs of concept to see how we want this code to look.
E
Yeah, so, what we're talking about is CI minutes consistency, and I think this part of the technical debt is more the kind of inconsistency that we have realized over time. By trying to fix this bug we are actually adding some technical debt purposely, and so that could probably be something we want to tackle soon. I don't know if it's marked for 13.1, so this could be one of the issues we pick as the technical debt issue we want to do for the next month.
A
Pipeline processing is kind of part of the core CI/CD platform, you know, because pipeline processing, pipeline statuses, this is something that needs to work, because users depend on that: what they defined in the configuration behaves reasonably well and the pipeline is being processed. In my opinion it's important, and today some of you probably saw that we added the CI/CD core platform label, and I think that issues or merge requests labeled with CI/CD core platform, and anything directly related, are something that is especially important.
E
Okay, so this is something we can schedule then, and we might still need to... so basically this feature, it's not ready to be picked up, to be worked on; it's still a generic issue, right? So we might need to do some extra investigation, because it is still in planning breakdown, and it's probably very big: it has a very high weight and needs to be broken down. Yes.
A
And that's the issue for which we have scheduled weekly calls for that particular work. I think the last time we managed to actually make some good progress, and we decided that every one of us is going to contribute some test cases, so that we could actually figure out what the intended behavior is, based on the preconditions and the intent we are going to put into a test case, yeah.
A
So what's going to be the outcome, probably, is that we are going to have a bunch of test cases that describe our shared understanding of how it should behave, right? So I think that would actually mean that we are making good progress with that problem, because it's one of the most complex aspects of pipeline processing that we are struggling with.
H
We still need to go back through most of the tests and kind of use better practices, because they were really out of date: they're testing implementation details, such as computed properties and methods. So we need to get around to making those tests have a little bit better coverage by testing exactly what the user sees. But I'm happy about this one; I was very glad to get this done.
F
There was a very interesting discussion, and I think it's ongoing, but there's a different approach in testing. We used to, in the front end, do a lot of testing of every single property in Vue, instead of testing just the UI, and this has been an ongoing conversation. But moving things to Jest is the first step, also removing a lot of unnecessary tests that are testing every single possible outcome but not what the user is actually seeing, and then we'll clean the code.
F
Yeah, and I see that Sarah is very happy. So basically it's been ongoing: we've been talking about moving the pipeline data to GraphQL, or "graph-Q-L", I never know which one people prefer to say, but it's probably going to be a big focus in Q2, and there's a lot of work to do. From my understanding there's a lot of work that will even come from the front end, even in terms of Ruby and querying, so there will be collaboration with the back end for sure. And from what I understood...
F
That's what I heard, yeah: then we would do some Ruby and try to expose some of the data. I mean, I don't know, I think it's a good idea. What I heard, I think, was that the idea was to remove some of the logic from the backend, because, since moving things to GraphQL is a priority for the front end, we might not be aligned on priority, and that's a discussion we should definitely have, like if the backend has their own priorities and the front end asks for something different, like implementing...
A
In my team, the pipelines endpoint is one of the most complex we have in the entire code base, especially because of the amount of low-level coding that we've done there to make it more performant, to avoid, for example, N+1 queries and stuff like that. So it might be a challenge; I don't know if this is something that front end would be able to do.
H
I think, more along the lines of what the front end may be focusing on, I'm not a hundred percent sure, but it's more like the schema definition, like what we need from the front end and defining that. It may be too complex for front-enders to, you know, hop in and refactor something to use GraphQL; yeah, the CI back end will know more about that. But the thing is, we do plan to do it very much in iterations.
I
Also, I think it's a really good opportunity for us all to work closer together. I think there's a lot of space for front end to do some stuff, get feedback and iterate that way, so that, you know, we can move towards it, and back end can... like, I think we're all really excited to learn too. So I think it's a good opportunity for us all to work together, and for front end to take some of the more tedious stuff without breaking queries. I'm excited about it. It's a dream!
C
I'll jump in a little bit here. I think, Sara, maybe we talked about this this week, but the idea that has sort of been, you know, discussed is having front end start down that path, and there are going to be some things for sure that are really complex, that are just going to be harder pieces of data to get, or whatever, but getting started down...
C
...that path is at least going to make some progress, and then, you know, maybe we get some of the fields in there that we need, and then for others we'll have to, you know, spin out a task or something. But we'll get farther by at least starting on it and then figuring out where the challenges are as we go. So obviously we're not just going to be, you know, merging code into master without getting some sanity checks and stuff like that.
F
It might be a good idea to have an approach where, whenever one of us from the front end wants to implement something, we open an MR and we start by having a technical discussion with a front end and a back end. So the front end proposes what they want to do, and the back end either approves or says, like, oh, you're going to go down a rabbit hole because of this or that. So we can have a very early conversation, with the front end trying to implement that specific point, and then get the review afterwards.
E
Yeah, I'm saying, like, is this on some sort of epic or issue? Maybe we need to create an issue to improve the test cases, and then we can close it and say, for this thing, we just closed it and we solved the problem of the test cases, and then continue with the next steps. Or do we just use this issue as a sort of umbrella issue for 13.1?
C
I think that's a good idea. At a minimum, having an issue to start with that we can put on the milestone and kind of track the work against is, I think, important. So maybe, if that issue is a brand new issue that we're going to create, let's go ahead and create it, go ahead and put the 13.1 milestone on it, and then tag Cheryl and me and Thao, and say this is an important technical debt issue for...
C
...you know, for 13.1, and that will get kind of into the view for the planning, and then it's up to us to talk with product about it and make sure that we can sort of, you know, expect that it will get put into the milestone. But I think it should be fine. We don't want, you know, a hundred of those coming all in one milestone, because then we'll have to figure out which ones we prioritize over others.