From YouTube: 2021-03-17 Database Scalability Working Group
D
I am... my wife is recovering, so she's sleeping right now. So if she texts me, I gotta run downstairs, okay? She had foot surgery. So basically, I can't have anything official going on, but you know, I can sneak into things.
D
Then, yeah, she's doing fine. She's a classic, you know, post-surgery case; it's a foot surgery, so it's like the classic thing: she's off the anesthetic, and the nerve block is in right now, so she's like, "it doesn't feel that bad."
C
Okay, thank you everyone. So it's time: today is 2021, March 17th. This is the weekly meeting of the Database Scalability Working Group. Let's get started, starting from what has been done in the past week. So, Craig, you have the first item.
A
Yeah, this was an action item from the last meeting we had, where we said we'd just throw an issue out there for wild ideas for the ci_builds table. So I created it, Gregor has already commented on it, and it's there for people to brainstorm once we get past the primary key wall.
C
So
it's
been
quiet
in
that
issue.
Please
take
a
look
and
add
your
comments
there
to
get
it
going.
Everybody
on
this
working
group
I
mean
thank
you
craig.
Next,
three
items
were
from
me
and
eric.
Actually.
The
second
item
is
your
mr
for
the
exit
exit
criteria.
So
thank
you
for
proposing
that
and
thanks
to
jerry
to
merge
that
exit
criteria.
Mr
and
the
next
one
yeah.
C
E
But like, this is important stuff. So yeah, I just added it as item d, but let me remove that, because you were kind enough to pre-populate the doc. So I was just pulling up the page; let me share my screen.
E
I think the intent of this, to be transparent, is that I don't want to have repeat experiences where we feel we need to do something, whether it's sharding or one of the other split patterns Jerry outlines here, and we end up doing something iterative that solves short-term problems, and we take our eye off this long-term goal.
E
So I just want to be clear about why we put this out there: because it's very difficult to satisfy, maybe impossible, without doing something big. It doesn't mean we can't iterate; there's an MVC in there, there's an iterative path to getting there. But this should feel a little bit more like the GCP migration project than the last database working group. This will be a big deal. And then...
E
This one is basically about robustness. The idea would be, you know: one, the failure of one database wouldn't automatically take out all of gitlab.com, so limiting the blast radius with this architecture is important. And then we also have here: minimize or eliminate the complexity for the self-managed use case. We talked about this last time, but it's basically, you know... we have a unique situation where we're not SaaS-only. We ship the same code to other people, and they run it themselves.
E
We
don't
want
to
have
two.
We
don't
want
to
fork
our
database
connector
and
therefore,
like
we're
testing
one
thing
on
gut.com
and
we
have
no
idea
what
we're
shipping
to
self-managed
users
a
different
scale,
there's
many
ways
to
solve
that
problem.
So
that's
why
it's
worded
as
reduced
or
limited
complexity.
E
We talked about the way Citus Data works: it's kind of a shim you put in, and it looks like the application is talking to a Postgres database, but it's actually doing a bunch of magic between itself and the databases. That's one way to manage this complexity; there are others I'll talk about. But this is a very unique problem: we are an open-core SaaS, most SaaSes are not, and that introduces this unique need to not screw up the self-managed use cases.
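For context on the shim E describes: Citus keeps the application speaking ordinary SQL to a single Postgres endpoint (the coordinator), which fans queries out to worker shards. A minimal sketch follows; the table and shard key are hypothetical, though create_distributed_table is Citus's actual API.

```sql
-- Minimal Citus sketch (table and column names are hypothetical).
CREATE EXTENSION citus;

CREATE TABLE events (
    project_id bigint NOT NULL,  -- hypothetical shard key
    id         bigserial,
    payload    jsonb,
    PRIMARY KEY (project_id, id)
);

-- Citus API call: hash-distribute rows across worker nodes by project_id.
SELECT create_distributed_table('events', 'project_id');

-- To the application this is still plain SQL; the coordinator routes it
-- to the right shard behind the scenes.
SELECT count(*) FROM events WHERE project_id = 42;
```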
E
So that's what I proposed, with some discussion, and Jerry merged it, so that's in the page. This should guide every decision that we make throughout this working group. Any questions or thoughts on that? And this can certainly be amended and evolve over time, as we learn.
F
Yeah, that was about the second point. Does that basically mean that if we want to separate failure modes by users, we will have to split by users on some level? I was wondering if that is a Z-axis split, and if that is part of what we're aiming for in this case.
E
I don't think it's opinionated in that regard. I mean, regardless of this, your question to me kind of gets at "what's our shard key," but regardless of which way you shard, the idea is that one of those shards can go down and it won't automatically wipe out all of its neighbors. Which is kind of the mode we're in today: being unsharded, the primary can be overcome and gitlab.com is suddenly unavailable.
E
So what I'm talking about is a world where we've got, let's say, many shards, without dictating the solution, and one of them goes down: data on the other shards can still be accessed. Now, whether that's "one specific user is affected and another user isn't" (that assumes we've sharded by user) is something we don't necessarily have to do that way. It may be that all users can still access their issues, but no users can access their MRs. That's what I'm leaving open. Does that make sense?
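To make the failure mode E leaves open ("issues up, MRs down") concrete, one plain-Postgres way to express a per-feature split is a foreign-data-wrapper boundary. This is only an illustration of the idea, not a design the group has chosen; the host, database, and credential values are hypothetical.

```sql
-- Hypothetical sketch of a functional split: merge request data lives in
-- its own database behind postgres_fdw, while issues stay local. If the
-- MR database goes down, issue queries keep working.
CREATE EXTENSION postgres_fdw;

CREATE SERVER merge_requests_db
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'mr-db.internal', dbname 'gitlab_mrs');  -- hypothetical

CREATE USER MAPPING FOR CURRENT_USER
    SERVER merge_requests_db
    OPTIONS (user 'gitlab', password 'secret');            -- hypothetical

CREATE FOREIGN TABLE merge_requests (
    id         bigint,
    project_id bigint,
    title      text
) SERVER merge_requests_db
  OPTIONS (table_name 'merge_requests');
```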
D
I apologize, I'm just seeing this. I saw it a couple days ago, but then it didn't really process in my brain the first time. The third item really doesn't feel like exit criteria; it feels more like architectural guidance associated with it.
D
Unless we did something like: we do something on SaaS, and then we say, yeah, we're going to eventually get to self-hosted and fix it up later. Which I personally would be against, because every time I've seen large organizations do that, they never get to the second part of the problem, and then that causes forking of code bases and other bad things to happen. I'm okay leaving it in there; it's just more of, like, am I getting the sentiment right for that last one, which is, you know...
E
Yeah, you're right, in the sense that if we get to the point where we want to check this box and we're then asking the question "did we just screw up self-managed?", then something went wrong and we went way too far. So this exit criteria is an absolute backstop, but it should inform very early decisions that we're making, right.
G
Yeah, so I wanted to ask about exit criteria number one. I don't know how many daily active users we have right now, but do we need to wait until we have 10 million to actually know that we met the exit criteria and that we are fine with this goal?
E
It looks like Jerry's answering: the number is roughly 1 million today. I think we don't need to wait to close this out until we've kind of proven that at real scale, because that could take years, or it could happen very suddenly. I think we need an architecture that we're all confident could do that. If it needs to be higher, we can make it higher, but it felt sufficiently ambitious, where, you know, the idea...
E
Is
you
you
look
at
what
we're
doing
here
through
this
lens
and
it's
like
the
idea
is
like
yeah.
This
is
we
can't
take
our
eye
off
this
ball?
It's
we
have
to
do
something
large
and
really
evolve
the
database
architecture
in
ways
that
we've
been
hesitant
through
positive
intent
and
by
using
our
iteration
value
in
the
past.
We
haven't
done
this
thing.
B
Yeah, I was just going to point out that one of the things we're doing right now is building a benchmarking environment. Right now we're doing it to validate the Postgres 12 upgrade and some other things, but I think, over time, the general idea of this environment is: I'm just going to dial up the load on the database and see in which interesting ways we break it. So we should be able to get some good approximations of how far we can push this.
B
We obviously have all the data for what the database is doing today, and has been doing for a while, at levels that we understand. So obviously there is some extrapolation there, but we should be able to validate that through the benchmarking environment, to get higher confidence in what the architecture is going to do.
D
A more aggressive architectural change... it's not to say that this is, like, the true engineering "gotcha" moment, where we're going to say "oh, it broke sooner, so therefore...", because what we want to see is the things we wouldn't anticipate that are going to break sooner.
C
Thank you. And I have a question; well, maybe it's a good idea to keep iterating on the criteria. I was thinking about something more data-driven, with more breakdowns: more specific goals to achieve under each paradigm. I mean, those three are the paradigms to me. So just a thought here: we probably want to keep iterating on the criteria, but this is a very good framework, in my opinion.
E
Yeah, I think there are definitely deliverables under these things, and we should track them somewhere. We don't want to necessarily overcomplicate the exit criteria, because they're meant to be the north star, as opposed to a task list. But yeah, open to ideas; what's worked in past working groups?
C
Thank you, okay. The next item is also me, just a quick status update: the two work streams and the DRIs were merged into the handbook page, and I also added a "working group: database scalability" label and built a board here, to show all the outstanding issues at the moment. So thanks for using that, and when you have issues related to this working group, please label them accordingly. Any questions? Moving on to what's happening next: so Craig, I guess the memory team update is from you.
A
Yep, so we're gonna start working on the composable code base pattern; and I'm sorry, my screen's bouncing around a little bit. Splitting the application into functional parts is the first step. Stan's good question: it helps us to break down the monolith, so we can kind of focus in on areas where we may have some high traffic to the database.
A
Eventually it will feed into the data access layer that's in Jerry's blueprint. So, immediate benefit? Not really; it's building blocks to get us there. Does that make sense?
A
That being said, maybe blockers should have come first. The memory team is pretty fully invested in the rapid action for verified database queries right now, and supporting that, which I did not call out here; I called out some other blockers. The database team is fully focused on the primary key migrations, and I'll give an update on the web hook logs: that's the one that's going on right now, and it's about 37% complete. That migration is being done in the old style, where it's a serial migration. Bullet point two is about the migration helpers.
A
We've built some helpers that will help us tune migrations: we can configure the block sizes (sorry, the batch sizes) on migrations, and we can run them concurrently, so it's not a serial migration process. I'm working with Giannis and Fabian to reorganize the epics that are supporting all of this, so we can better surface where we are on migrating all the primary keys that are in jeopardy right now.
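The helpers Craig mentions are GitLab-internal, but the underlying batching idea can be sketched in plain SQL: instead of one long serial UPDATE, the backfill walks the id range in tunable batches and commits as it goes. Names here are illustrative, and GitLab runs such batches as separate background jobs rather than one loop; this is only a sketch of the technique.

```sql
-- Sketch of a batched backfill (table and column names hypothetical).
-- Each batch is a short transaction, so locks stay brief and vacuum
-- can keep up.
ALTER TABLE web_hook_logs ADD COLUMN id_convert_to_bigint bigint;

DO $$
DECLARE
    batch_size  integer := 10000;  -- the tunable knob Craig describes
    max_id      bigint;
    lower_bound bigint := 0;
BEGIN
    SELECT max(id) INTO max_id FROM web_hook_logs;
    WHILE lower_bound <= max_id LOOP
        UPDATE web_hook_logs
           SET id_convert_to_bigint = id
         WHERE id > lower_bound
           AND id <= lower_bound + batch_size;
        lower_bound := lower_bound + batch_size;
        -- Committing inside a DO block needs PostgreSQL 11+ and must run
        -- outside an explicit transaction block.
        COMMIT;
    END LOOP;
END $$;
```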
C
No need to verbalize my question, as long as you are working on that. Yep.
A
PostgreSQL 12: we just talked about that in the gitlab.com daily stand-up. If we're expediting the upgrade to PG12, the database team will have to expedite this work; sorry, explicitly marking CTEs as materialized. It's not a significant amount of work, but, you know, it will offset other work when we have to expedite it to get it done prior to the upgrade.
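The work Craig refers to stems from a real PostgreSQL 12 behavior change: before 12, every WITH query (CTE) was materialized and acted as an optimization fence; from 12 on, the planner may inline it unless it is explicitly marked. A small sketch of the kind of rewrite involved, using a made-up query:

```sql
-- Before PostgreSQL 12, the CTE below was always materialized.
WITH recent_builds AS (
    SELECT * FROM ci_builds
    WHERE created_at > now() - interval '1 day'
)
SELECT count(*) FROM recent_builds;

-- PostgreSQL 12 may inline that CTE. To keep the pre-12 plan, mark it
-- explicitly with the MATERIALIZED keyword introduced in 12.
WITH recent_builds AS MATERIALIZED (
    SELECT * FROM ci_builds
    WHERE created_at > now() - interval '1 day'
)
SELECT count(*) FROM recent_builds;
```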
A
If
we
do
expedite
the
upgrade-
and
we
may
reach
out
to
other
teams
to
help
once
we
set
a
standard
on
here's,
what
you
need
to
do
and
there
are
x
number
of
other
queries
that
need
to
do
the
same
thing.
We'll
ask
the
teams
to
help
out
if
we
need
to
delegate
and
then
potential
bottlenecks,
so
we
have
a
couple
paternity
leaves
called
out
there.
I
don't
know
what
else
we
need
to
call
out.
So
short
term
head
count,
reset
right,
christopher's,
question.
D
Yeah,
basically,
this
is
awful
because
it's
just
like
limited
visibility,
but,
like
I
was
talking
to
somebody
on
the
global
search
team
and
he
was
talking
about
disappointment
and
the
fact
of
we
haven't
seen
as
much
adoption
where
other
teams
are
picking
it
up.
So
I
was
like
thinking
to
myself.
Okay
of
the
burning
issues.
We
have
right
now
and
I
talked
to
nope
a
little
bit
about
global
search
as
well
and
he's
like.
D
Well,
you
know
it's
when
teams
decide
it's
important
to
them
and
it
feels
like
we've
got
it
into
a
pretty
decent
state
where
the
growth
seems
to
be
going
pretty
good,
independent
of
it,
and
I'm
just
wondering
whether
or
not
like
we
should
be
like
looking
at
like
teams
like
that,
where
it
makes
sense
to
even
pull
them
in,
for
you
know,
one
or
two
months
to
potentially
help
with
this
kind
of
stuff.
A
Sure, I would not turn down more help. I guess the only question would be expertise, right? Is there PostgreSQL expertise on a team that could jump in and help quickly?
A
And, well, it may not be obvious, but the DB maintainers would probably be a good place to start as well; see if there's any bandwidth there.
C
Yes, yeah. There were two incidents recently, last week and this Monday (both on Mondays), related to the database situation.
C
So
I
think
everybody
here
should
be
aware
of
those
incidents
and
we
are
running
daily
stand
up
to
mitigate
that
risk.
B
B
C
Yeah, and we do have a few items updating there, like the PK (primary key) overflow progress, as a mid-term or long-term item. I mean, this whole group is the long term in that daily stand-up.
C
Okay, no more questions about blockers, then. Let's move on to discussion items; and yes, you have the first one.
F
Yeah, this came from last week; we talked about it a bit. We are basically talking about introducing a data access layer in the application, and one of the concerns that we discussed is: do we need to treat all the data stores that we will have as a single entity, and what does that entail? And one of the concerns that I have is basically: are we trying to hide that complexity from a developer, and should we do that?
E
It's already a pretty complex thing to contribute to, so that's something we should definitely consider. Sorry, what I mean is: can everyone still contribute when we finish this work? That's really important, generally.
F
I wonder if it actually gets easier if we have this one data access layer, because, as easy as it might look, you might easily screw things up when you combine different data stores that you're not supposed to, or hit whatever cases are maybe not as well covered. So I'm thinking it's very similar.
F
Basically, it allows you to interact with the database without knowing much about it. At the same time, it does hide the complexity from you, but it doesn't prevent you from still running into those problems. So I don't know if we need to hide all of this complexity from a developer.
B
So I think it's an item that needs further discussion. Obviously I suggested it in the original write-up because it is how I've seen this problem solved: once you start having multiple data stores, then expecting 400 developers to truly understand which store is which is going to get tricky. And once you start having these multiple layered stores, then caching becomes, like, this integral part of the application, right? So take our case, where all the stage groups are doing things to the database; again, once it starts...
B
You know, once we start pulling strands, then we should avoid having one team solving this problem one way and another team solving the same problem this other way. So again, this is based on prior experience. I am by no means a software architect, but I've been close enough to people writing these types of systems to understand what they're trying to do.
B
I think it's one of the bullet points, and it's something that I think was assigned to Camille to think about and explore. So it's probably worthwhile seeing what his initial thoughts are, as the DRI on this aspect; and maybe the determination is no, we don't need to do this. Okay, so I don't know if we can answer most of those questions right now, but...
C
Okay, okay: let's move on to the second discussion item. So I have a question: how do we identify the next two patterns for the first iteration? Jerry, you have some thoughts to share?
B
Yeah, so my feedback on that is: we focus on read-mostly and on time decay. Read-mostly, because it seems to be a relatively easy problem, and time decay, because it is an active problem with ci_builds. I would rather we don't try to, like, move across eight different lanes, trying to do too much. Also, solving this is going to give us some data as to whether we need a data layer or not.
A
So I added a couple comments there. I think some of the read-mostly work is in flight with the rapid action, and I called out the issue for the Sidekiq jobs that the memory group's working on right now. On time decay, we started down that path with the web hook logs primary key migration and partitioning combination, and then I know some discussions with ci_builds, mostly brainstorming at this point in time, are underway.
F
Yeah, thanks. Would it make sense to get a sort of summary of the time decay approach that we're discussing for web hook logs?
F
This
is
a
problem
that
we
currently
have
web
book
logs
has
a
retention
strategy,
but
it
struggles
to
keep
up,
so
we
have
much
more
data
than
we
actually
need
to,
because
we
failed
to
delete
it
basically
in
time,
and
that
is
a
pattern
that
you
can
that
we
can
apply
more
universally.
Basically,
so
we
can
have
a
short
summary
on
that.
That's
helpful.
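A minimal sketch of the time-decay pattern F summarizes: when row-by-row DELETEs cannot keep up with retention, partition the table by time so that expiry becomes a metadata operation. Table and column names are illustrative, not the production schema.

```sql
-- Time-decay retention via range partitioning (names hypothetical).
-- Note: a partitioned table's primary key must include the partition key.
CREATE TABLE web_hook_logs (
    id          bigserial,
    web_hook_id bigint      NOT NULL,
    response    text,
    created_at  timestamptz NOT NULL,
    PRIMARY KEY (id, created_at)
) PARTITION BY RANGE (created_at);

CREATE TABLE web_hook_logs_2021_02 PARTITION OF web_hook_logs
    FOR VALUES FROM ('2021-02-01') TO ('2021-03-01');
CREATE TABLE web_hook_logs_2021_03 PARTITION OF web_hook_logs
    FOR VALUES FROM ('2021-03-01') TO ('2021-04-01');

-- Retention becomes dropping a whole partition instead of deleting
-- millions of individual rows.
DROP TABLE web_hook_logs_2021_02;
```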
B
We should pay off our debt before we start thinking about scaling, right? Because if we scale... you know, if we all agree that that table has a bunch of stuff that shouldn't be there, then scaling it and making it distributed in some way is just distributing that junk around, and then it's going to become a bigger problem to solve later, I think.
C
Okay, yeah. Grzegorz, you want to verbalize your comment? Yeah.
G
So we've had a lot of discussions about ci_builds and our strategy for partitioning this table. There are two ideas: the time decay solution that has been mentioned today, and another one, which is partitioning by project and namespace. And I think this is an interesting discussion that is relevant not only in the case of ci_builds, but might actually be a problem for other teams later on.
G
Time decay is interesting because it's aligned with the working group goals: it's easier to actually separate data out to different data stores, introducing different data stores that are much smaller and less powerful, for old data. But partitioning by project and namespace is easier in the sense that it might require far fewer product changes. And I just wonder if there is some heuristic we can apply here to make the decision about choosing one over the other easier.
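To put Grzegorz's two options in declarative-partitioning terms: the time-decay route is a range partition on a timestamp (as sketched earlier), while the project/namespace route is roughly a hash partition on the tenant key. A sketch of the latter with hypothetical names; the trade-off is that a tenant's rows stay together and per-tenant isolation gets easier, but old data never ages out on its own.

```sql
-- Partition build-like data by tenant instead of by time
-- (names hypothetical; the partition key must be part of the PK).
CREATE TABLE builds (
    id         bigserial,
    project_id bigint NOT NULL,
    status     text,
    created_at timestamptz,
    PRIMARY KEY (id, project_id)
) PARTITION BY HASH (project_id);

-- A fixed number of hash partitions; row placement is automatic.
CREATE TABLE builds_p0 PARTITION OF builds FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE builds_p1 PARTITION OF builds FOR VALUES WITH (MODULUS 4, REMAINDER 1);
CREATE TABLE builds_p2 PARTITION OF builds FOR VALUES WITH (MODULUS 4, REMAINDER 2);
CREATE TABLE builds_p3 PARTITION OF builds FOR VALUES WITH (MODULUS 4, REMAINDER 3);
```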
C
Thank you. Nick, yeah?
H
I just had a point, yeah. I know it's not the most... there are more important considerations here, but something to note is that project/namespace partitioning could have some benefits for self-managed Geo customers, and potentially Geo on gitlab.com in the future.
E
Yeah, and I put it in, I know: something that keeps coming up and won't go away is this issue of data residency; for instance, GDPR, European customers wanting their data in a German data center. We decided not to complicate this working group by tackling that at the same time, but we should think about it: if that's next, we shouldn't do anything that makes it harder, if we have a chance to choose one shard key versus another, for instance, or whatever it is. So maybe I'll... maybe I'll add a...
E
I think I'll do another MR for the exit criteria: add my suggestion up above, and then just put something really general here, like "don't make data residency any harder than it is today." Not as the best guide for every decision we make; I think we're gonna ask product, in parallel, to work on more requirements for that, because there's data at rest, data in transit; what geos are actually valuable? What does performance mean?
E
So
we
won't
pollute
this
working
group
with
that,
but,
like
we
just
need
to
be
aware
that
that
is
likely
coming
and
we
shouldn't
really
corner
ourselves
with
it
with
a
a
b
decision
we're
making
here
where
we
feel
like
it's.
Well,
it's
six
of
one
half
dozen
of
another.
Let's
do
this
like
this
could
be
a
tie
breaker
for
some
some
decisions
that
seem
unequal
footing.
D
The one aspect I can see is that the residency of users, the length of time that they've been using the product, can be directly proportional to the scale, and it really feels like we need time decay policies for various pieces of this. Like, what are the expectations: can you see a build from 10 years ago, or can you only see, say, the last two years, short of having to go into Glacier or some kind of offline environment to basically pull that information?
E
Yeah, so Jerry and I chatted about this in Slack. So we should... I have to run to another meeting, but we should just find the right level, to not let this distract from what we're trying to do here, but also let it inform anything where it's like an A/B decision: let's just take this into account and make sure we don't make things much harder for us imminently. Okay.
B
Yeah, I just want to point out that the things that I wrote in the original write-up for the working group are proposals, or guidance. If there are other ways to do things, by all means, let's put another proposal together and do things differently. So just because I said time decay, please do not assume "oh, time decay it is." If there are other ways of solving these problems, by all means, let's just do it differently.
C
Thank you. We'll move the other items to the next meeting. Thank you everyone for today; enjoy the rest of your day.