From YouTube: 20180822 SIG Cluster Lifecycle kubeadm office hours
B: I'm not sure that we can resolve all the dependencies between the two activities, but this is not a problem, because you can do a kubeadm join of new control plane nodes even without the config changes, by using flags. It would be really useful for me if someone could start testing this, because basically I'm the only one who has tested this feature, sadly. Okay, next topic is... please, go ahead.
A: We had a resource dedicated to testing, but he's been diverted. I don't know if we will have a dedicated person to test, and we don't actually have automation in place to stand up an HA configuration for 1.12, so we can try. We need to do a refresh of the documentation, and as part of that refresh it should force exercising this particular path.
A: Fine by me, if we want to just get it in there for the time being, because it's experimental. Then we should have resources in 1.13, at least from our side, dedicated to testing and spinning that up. I don't know if other folks actually have people who can exercise the entire rigmarole there. Also, in 1.13 we will re-jig the test deployment from using kubernetes-anywhere to using Cluster API. I'm going to force that... sorry, that's for the testing: that's for the default deployment for the testing harness.
D: Fabrizio, I haven't had a chance to look at the specs. Do you have to supply an external load balancer for the control plane join?
B: Yes. And next up is etcd.
A: Thanks for all the work on this and for seeing it through. I know it's taken several releases to get this all the way through, and it dates back to, like, a bunch of configuration changes that had to occur across several cycles to make it possible, and not onerous, to do this. So I appreciate the patience and the fortitude in seeing this through across several releases.
D: Yeah, I put this bullet point on here because I did reach out to Mike, since we were talking about dropping this feature yesterday, and Mike gladly decided to join the call. So I just wanted to give him a forum to talk about how they're using self-hosting on their clusters at Oak Auto; some pretty interesting use cases there.
E: Yeah, sure. The details escape me for this, but we have to do a slightly different version of our own pivot, because we run our own manifests. We run API servers as Deployments floating across the entire cluster, along with all the other control plane components.
A: So the one thing that we're doing, just so there's clarity here, and which we should probably log an issue for, is that we just don't want it in kubeadm. We think that this piece of pivoting and management of it is separable, right, so it doesn't necessarily belong in kubeadm proper, and it actually complicates a bunch of workflow issues with upgrades, as well as control plane join, as well as other operations that kubeadm needs to support. So we'd absolutely welcome it elsewhere, because this is a SIG, and the SIG offers many tools, right.
A: This is the kubeadm office hours, but the SIG in general would gladly support a pivot tool that would take a given manifest and do the exact same routine, and we could even take the code out of kubeadm proper into its own utility to do those types of things, right. So we don't want to prevent it, and we don't want to say we're not endorsing it. We're just saying that we're removing it from kubeadm proper to simplify the control path, should other people want to do this.
A: From my understanding, Bootkube is dead, but no, I don't exactly know how it's going to live on, or what the supported way forward is, now that that world is still changing and the story is changing. There are no representatives that I'm aware of on this call who actually represent that space anymore, so I don't want to speak to it. All I know is that some things have been deprecated, and I don't know the state of that stuff anymore.
E: The upgrade side of things we're less interested in, because once we've done the kubeadm and Bootkube pivots to self-hosting, thereafter our entire control plane is just run like the rest of the application deployments. So we just do upgrades by doing a kubectl apply, and the cluster orchestrates the upgrade for us; that's how we manage it.
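The upgrade flow E describes, editing a Deployment and letting the cluster roll it out, can be sketched with client-go; this is a minimal illustration, not their actual tooling, and the Deployment name, namespace, and image tag are assumptions:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from an admin kubeconfig (path is an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Fetch the self-hosted API server Deployment (name is hypothetical).
	deploy, err := client.AppsV1().Deployments("kube-system").
		Get(context.TODO(), "self-hosted-kube-apiserver", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Bump the image and write it back; the Deployment controller then
	// performs the rolling upgrade, much as a `kubectl apply` would.
	deploy.Spec.Template.Spec.Containers[0].Image = "k8s.gcr.io/kube-apiserver:v1.12.0"
	if _, err := client.AppsV1().Deployments("kube-system").
		Update(context.TODO(), deploy, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("rolling upgrade triggered")
}
```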
D: So the reason why I reached out to Mike is just that, after hours of conversation at KubeCon Copenhagen, it was pretty easy to identify that they're doing some pretty novel things in this space. They were early adopters of Bootkube, before they migrated to kubeadm, and I think Mike and his team have a lot of authority in regard to self-hosting.
D: You know, there are business priorities and everything, right; that's, like, the whole thing with open source these days, how you fund all of this work. But yeah, if you want to continue to be a voice in regard to the self-hosting story, we're happy to be a home for you; cluster lifecycle is absolutely the place to advocate for a self-hosting story. It sounds like you're accepting of dropping the self-hosting pivot from kubeadm, and on the reasoning... personally, I'm not convinced of that.
D: I don't know that this wouldn't be equally as maintainable as, like, a separate command-line entry point in kubeadm, but we could probably chat about that. Like, I don't know if this has to be extracted out into its own tool to be maintainable; extracting it into its own tool would probably require another repository or something, though it would probably allow us to release on a higher-frequency cadence. Well...
A: I'm okay with having a separate subcommand for this to maintain for the time being; it already exists in phases. I just want to make sure that the standard upgrade path and join path are clean, because the join and upgrade for control plane additions became really weird with the original PR.
D: Yeah, it sounds to me like the concise conclusion is that we want to deprecate the init/upgrade feature flag and excise that bit of code from the critical path for normal kubeadm usage. We'll leave the code in phases, recognizing that we've identified that phases are useful for a couple of ad-hoc user actions, and that in the future, beyond the 1.12 release, they may be given their own first-class subcommands when we change the UX. And also, thanks for joining the call, Mike.
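The gating D describes, keeping the self-hosting pivot out of the default flow unless it is opted into, boils down to a feature-gate check; a simplified toy sketch, not kubeadm's actual code, with the gate name borrowed from kubeadm of that era:

```go
package main

import "fmt"

// featureGates mirrors the FeatureGates map in the kubeadm config
// (e.g. populated from `--feature-gates=SelfHosting=true`).
type featureGates map[string]bool

// enabled reports whether a gate was explicitly switched on;
// experimental gates default to off.
func enabled(gates featureGates, name string) bool {
	return gates[name]
}

func initControlPlane(gates featureGates) {
	// The normal, critical-path init work runs unconditionally.
	fmt.Println("writing static pod manifests")

	// The self-hosting pivot runs only when opted into, which keeps it
	// excised from the critical path of normal kubeadm usage.
	if enabled(gates, "SelfHosting") {
		fmt.Println("pivoting control plane to self-hosted pods")
	}
}

func main() {
	initControlPlane(featureGates{"SelfHosting": true})
}
```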
A: It should be in the release notes. It was already an alpha feature, and we reserve all rights around alpha features. So, as a result, you know, we can just make sure that the release note for this particular one says you can still use it via the alpha phases subcommands. So, as long as that exists, then it should be okay.
A: But again, you know, we'll have to just make sure that the release notes are clear, and that we go through them with a fine-tooth comb and add extra details there. But it's already been stated that we've removed all alpha documentation from the mainline documentation, so people have had to have known, understood, and opted into some of this stuff.
A: That's a part of it. There's also the implications of supporting checkpointing, and there we don't have an official path, right. So there's a whole state space of things that exist, but there's nothing that's actually fully supported, and every single choice we've walked down has a bunch of security implications, right; Bootkube chose to go a certain route that still has security implications.
A
The
other
choices
that
we
went
were
had
less
of
them,
but
wasn't
didn't
solve
all
the
problems
right,
so
so
at
this
time,
because
it's
complicated
and
because
the
cost
benefit
of
doing
some
of
these
other
features
and
we
want
to
get
rubidium
to
GA.
First,
it
makes
a
ton
of
sense
to
simplify
the
logic
in
cubm
and
to
just
get
the
other
features
in
that
we
would
like
to
have
to
get
to
GA
and
then
should
folks
really
want
this
feature
in
the
community.
B: So, I wrote in the meeting notes a status update about how the config file is going to change, and I also brought two main decisions, or points, that I want to share with the SIG. The first one is that we are not going to cut v1beta1; we will go with v1alpha3.
B: The second is managing the different scenarios when upgrading from 1.11 to 1.12: from the tool's point of view, do we want v1alpha2 or do we want v1alpha3? We want v1alpha3, because they will be different. And the third thread of activity is cleanup: after all the changes to the API, there is room for cleaning things up. That thread is kind of on hold, because we are waiting for the first two to complete.
A: I'm going to walk through Ross's change, okay, as soon as I can today. The problem with massive API changes is they're massive, and there are a lot of finagle-y things and some machinery that, to be honest, I don't quite understand why they're there. We have some custom jiggery around fuzz testing and API conversions that doesn't exist in some of the other API machinery stuff, so I need to dig through the details.
B: I learned a lot about that in the last two weeks, so if you want, I can help you. And yeah, there is this PR, which is blocking the other changes. And then, if there are no other topics to discuss, I would like to take the opportunity to ask for some opinions about the API changes, so we can do some co-design, or I can get some early feedback before sending the PRs, if we can do that.
B: I can explain. Basically, we are splitting the init configuration into an init configuration and a cluster configuration. This PR is a set of refactorings that change the functions which were using the init configuration to use only the cluster configuration. Why I put this PR on hold is that, due to the changes that are still missing, there are some changes in this PR that must be reworked if the others go in.
A: A lot of context here with regard to why we're shifting the configuration and separating pieces out. Part of it has to do with dependencies, and with having a given operation use only the configuration it requires, so the join only has its configuration. But there's also a separation of concerns that was invoked as part of this: the configuration was broken apart because originally the master configuration actually subsumed the sub-elements of the component configuration for other things. That was part of the 1.10 breakage that occurred.
B: This is a good point, and there were two bigger concerns to be addressed. The first is component config: basically, before, we were embedding component configs in our master config, but each component config is versioned and managed by another team.
B: And consider that we will have not only two component configs, but more. So the first big reason for our API redesign was to avoid dependencies between our API and the component config APIs. This was the first main reason for the redesign.
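The coupling problem B describes can be pictured as the difference between embedding another team's versioned type and carrying it as an opaque, self-described document; a schematic Go sketch with hypothetical type names:

```go
package main

// Stand-in for a config type owned by another SIG.
type kubeProxyConfiguration struct {
	Mode string
}

// Tightly coupled: the old master configuration embeds another
// component's versioned type, so every change there ripples into
// this API and its conversion/fuzzing machinery.
type masterConfigurationOld struct {
	KubeProxyConfig kubeProxyConfiguration
}

// Loosely coupled: the component config travels as an opaque,
// self-described document (its own apiVersion/kind), versioned and
// converted by the owning component, not by kubeadm.
type clusterConfigurationNew struct {
	ComponentConfigs map[string][]byte // keyed by component name
}

func main() {}
```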
B: The second main problem was that in the master configuration we were conflating different sets of information: the information about the node where we were initializing the cluster, and the information that applies to the whole cluster. And basically, now we need more instances of the information regarding the master node; otherwise we were not capable of providing a sustainable story for the upgrades in an HA scenario.
A: I also want to make sure that, like, those are part of the reasons. There's also a reason around the grand unification of component config, and that's this whole separate working effort that Lucas has been pushing really hard on, even though he's somewhere in the woods right now. Plus, I also want to push back against the assertion... I don't know the exact excerpt, but I've got a decade-plus in cluster management, and I hate using that term.
A: We want to make sure that we're not over-allowing dependency graph proliferation. Say you want to change something that should be totally unrelated, from A to B, or that we would think is, right: we don't want to have this separate component way down in the graph depending upon something up the stack. This is analogous to a bunch of things inside of Borg itself, where people developed dependencies on labels, right.
A: I think we need to be careful, though. I don't think your point should go unheard; I think we need to be mindful that we give enough information down the stack, and that we're clearly thinking about it over time, so that we can refine the scope. Because we don't want ridiculous pain points in the code where you need information and, in order for you to get it, you have to plumb all the way up and all the way down again, right. That's the contrary concern, on the other side.
G: I was just gonna say we need to be careful about how we handle stored config as well, because right now we don't have a way to actually mutate a running cluster outside of an upgrade. So if people actually make changes to components, they'd be making changes to the actual components on disk instead of to the stored config that we have. So I think, in general, we need to be cautious about relying on, you know, any stored previous config, because it doesn't necessarily represent the actual state of the system unless we're doing some type of reconciliation.
B: The config is cluster-wide: for kube-proxy, you have the configuration of the kube-proxy, which is shared between all the instances, and the same goes for the kubelet. Then you can have per-instance settings, which is one of the problems of the VM model. But that's just to answer the note from Jason.
G: So there could also be situations where the semantics around the upgrade aren't necessarily around the component config as well. For example, with kube-proxy right now, there's a workaround that you need to do in some environments where you have to add an init container to kind of modify the actual component config, on a per-host basis, to override the hostname.
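The per-host workaround G mentions is commonly done by rewriting the shared kube-proxy config before the main container starts; a hedged sketch using Kubernetes core/v1 types, with the image, paths, and sed expression all being assumptions:

```go
package main

import corev1 "k8s.io/api/core/v1"

// An init container that rewrites the shared kube-proxy config on
// each node before the real container starts, injecting this node's
// name as hostnameOverride.
func kubeProxyInitContainer() corev1.Container {
	return corev1.Container{
		Name:  "fixup-hostname",
		Image: "busybox:1.29",
		Command: []string{
			"sh", "-c",
			"sed \"s/hostnameOverride: .*/hostnameOverride: $(NODE_NAME)/\" " +
				"/var/lib/kube-proxy/config.conf > /tmp/config.conf",
		},
		Env: []corev1.EnvVar{{
			// Downward API: expose the node name to the container.
			Name: "NODE_NAME",
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "spec.nodeName"},
			},
		}},
	}
}

func main() { _ = kubeProxyInitContainer() }
```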
A: That's part of the work to change it, to get it into component config, especially for the proxy, to fix these problems. Because there are separate issues with regard to how we unify handling of config and command-line args across the different components; this issue occurred not just in the proxy, but it had actually occurred in the kubelet too, before. Yes.
B: In the new API design it will be simpler to have configuration per instance, but for kube-proxy there is also the issue that kube-proxy does not allow mixing flags and config. So from our side we are moving in that direction, but kube-proxy must also be ready to accept this, like the kubelet does now, for instance.
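The restriction B refers to, that a component driven by a config file rejects ad-hoc flag overrides rather than silently mixing the two sources, looks roughly like this toy sketch (the flag names are invented for illustration):

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// A toy version of the restriction: once a component is driven by a
// config file, individual flag overrides are rejected instead of
// being silently mixed with the file's contents.
func main() {
	configFile := flag.String("config", "", "path to a component config file")
	bindAddr := flag.String("bind-address", "", "override (flag-only mode)")
	flag.Parse()

	if *configFile != "" && *bindAddr != "" {
		fmt.Fprintln(os.Stderr,
			"--bind-address may not be combined with --config; set it in the config file")
		os.Exit(1)
	}
	fmt.Println("ok")
}
```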
A: Yeah, sadly, I don't know how to solve this problem, other than that I wish every component would get together and buy into the whole process, and have an effort across all SIGs to make this unified going forward. But Lucas is doing a superhuman effort right now to try and make this go with the API machinery folks.
B: So, this tries to summarize what we are doing; this is the current status. We've now moved away from the original master configuration, and basically we have the init configuration, where we have the bootstrap tokens, which are basically runtime attributes: attributes that are used by kubeadm init, but are not part of the cluster or node configuration that will be stored. Then we have the node registration options, which are basically the node- or kubelet-specific settings. And when the PR merges, we will have the cluster configuration that contains everything else; by design it should be everything that applies at the cluster level. Is that clear enough? Any questions about where we are now?
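A simplified Go sketch of the split B just summarized, loosely modeled on kubeadm's v1alpha3 shape with the fields trimmed and renamed for illustration:

```go
package main

// initConfiguration holds runtime, init-only inputs that are never
// persisted into the cluster.
type initConfiguration struct {
	BootstrapTokens  []string // consumed by kubeadm init, then discarded
	NodeRegistration nodeRegistrationOptions
}

// nodeRegistrationOptions carries node/kubelet-specific settings for
// the machine being initialized or joined.
type nodeRegistrationOptions struct {
	Name      string
	CRISocket string
	Taints    []string
}

// clusterConfiguration is what gets stored (e.g. in the kubeadm-config
// ConfigMap) because it applies to the whole cluster, not one node.
type clusterConfiguration struct {
	KubernetesVersion    string
	ControlPlaneEndpoint string
	PodSubnet            string
	ServiceSubnet        string
}

func main() {}
```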
A: The problem that we had before... this is kind of the separation of concerns, and part of the reason why we're doing it. The problem we had before, say for example with that proxy issue, is that we absorbed it and had it in our config, and part of the machinery for the upgrade did not do the versioning of that piece. So separating that out, having that soft dependency, allows us to manage and do the API machinery for just our stuff.
D: I am on the same page there. All I'm asking is maybe a simpler question, which is: even between two of the same versions of the cluster configuration, in an upgrade from 1.11 to 1.12 or 1.12 to 1.13, the API servers are going to have configuration drift as the rolling update is applied to the cluster, and I'm curious how that's supposed to be represented, or if there's a plan to represent that drift.
B: You read the config map, and then you pass the config map to the API machinery, and the API machinery automatically converts it to the latest version known in the codebase, the internal version of the API in the codebase. And when you save the config map back, it will be serialized in the latest version. So during the update you read the maybe-old version, convert it to the new one, use the new one, and save the new one.
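The read-convert-write round trip B describes is what the API machinery scheme does during an upgrade; a hand-rolled stand-in (the real code uses runtime.Scheme, and these types and field mappings are invented for illustration):

```go
package main

import "fmt"

// Two external, versioned shapes plus the internal type the code
// actually works with.
type configV1alpha2 struct{ API string }      // what an older cluster stored
type configV1alpha3 struct{ Endpoint string } // the current external shape
type internalConfig struct{ Endpoint string }

// decode converts whatever stored version was found into internal form.
func decode(old configV1alpha2) internalConfig {
	return internalConfig{Endpoint: old.API}
}

// encode always serializes at the latest supported version, so simply
// reading and re-saving the ConfigMap migrates it forward.
func encode(in internalConfig) configV1alpha3 {
	return configV1alpha3{Endpoint: in.Endpoint}
}

func main() {
	stored := configV1alpha2{API: "10.0.0.1:6443"} // read from the ConfigMap
	fmt.Printf("%+v\n", encode(decode(stored)))    // written back at v1alpha3
}
```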
A: Those args, if you do custom args... In 1.11 we eliminated a ton of arguments that were plumbed through kubeadm to pass custom arguments, via the kubeadm config, that were basically overrides to the API server or the other components. We eliminated a ton of that stuff to only have what we needed, to trim it down and to prevent the proliferation of configuration knobs, so we can move to beta and then to GA. So if you do custom args, the responsibility for passing them through is on you.
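The pass-through A mentions is a plain map of flag names to values (kubeadm's ExtraArgs fields) rendered onto the component command line; a minimal sketch of that rendering, with the example flag chosen arbitrarily:

```go
package main

import (
	"fmt"
	"sort"
)

// renderExtraArgs appends user-supplied overrides onto a component's
// command line. kubeadm does not validate these against the target
// component's real flag set; that responsibility stays with the user.
func renderExtraArgs(base []string, extra map[string]string) []string {
	keys := make([]string, 0, len(extra))
	for k := range extra {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic manifest output
	for _, k := range keys {
		base = append(base, fmt.Sprintf("--%s=%s", k, extra[k]))
	}
	return base
}

func main() {
	cmd := []string{"kube-apiserver", "--secure-port=6443"}
	cmd = renderExtraArgs(cmd, map[string]string{"audit-log-path": "/var/log/audit.log"})
	fmt.Println(cmd)
}
```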
D: That cluster configuration, I guess what I'm getting hung up on is that it's supposed to be representative of the entire cluster. But during an update operation, as you're rolling things out, it's only a progressive representation of the actual state, because there's drift. So I'm just wondering if there's any plan to represent that or handle that.
A: A grand unified component config for the API server has been a long-standing gripe for the entire community. If they were to do that, and actually have proper versions managed for the component config for the API server, it's a win-win, but I don't know when we're gonna get there. That's the ideal scenario, because then they'd support their own upgrading across versions of the configuration.
B: Let me try to clarify this. I think that the only really supported scenario for upgrades is an upgrade that doesn't change any attributes or any flags; that is the only upgrade path that we are testing. I think that the only exception, if I remember right, is that you can change the feature gates.
B: But what happens during an update gives you a kind of backdoor for doing all sorts of changes during an upgrade. Basically, you recreate the control plane. So if, before doing the upgrade, you change some configuration in your config maps, then the upgrade creates the new control plane instances with the new settings.
A: Seems totally fine to me. I think once people go into custom arguments that are passed through, and the API machinery... I mean, it's part of the release notes, right? The release notes for the API server will specify that these arguments have been deprecated and these other ones are taking their place, or that this feature has been deprecated and is moving over here. Because we offer that plumb-through, they can always update the plumbing on upgrade.
C: Excuse me, I confused the threads; okay, yeah. My point is that this is kind of late in the cycle to make such changes, and I guess we should continue with what we have. I mean, the proposal has been there for, like, the whole cycle, basically, and we didn't get that much feedback from others, so I think we should continue. Basically, we are almost out of time for this meeting.
B: Yes. I think that the next change would be to restructure, to improve the control plane configuration by using sub-structures, because now, in the cluster configuration, we have attributes about many components, but they are kind of mixed together. So we will create sub-structures for the API server configuration, the controller manager configuration, and the scheduler configuration. And then there will be another step, on which I would like to get some feedback.
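A sketch of the restructuring B proposes, per-component sub-structures inside the cluster configuration instead of flat, intermixed attributes; the field names here are illustrative (a similar shape later landed in kubeadm's v1beta1):

```go
package main

// controlPlaneComponent groups the knobs that every control plane
// component shares, instead of spreading them flat across the
// cluster configuration.
type controlPlaneComponent struct {
	ExtraArgs    map[string]string
	ExtraVolumes []string
}

// clusterConfigurationRestructured nests one sub-structure per
// component, so API server, controller manager, and scheduler
// attributes are no longer mixed together.
type clusterConfigurationRestructured struct {
	APIServer         controlPlaneComponent
	ControllerManager controlPlaneComponent
	Scheduler         controlPlaneComponent
}

func main() {}
```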
B: The best way to do this is that we need to move outside the cluster configuration the set of settings which are specific to one API server instance. This can be done by nesting them into the node registration, which I don't know if it is a good solution, because the node registration was designed for the node or the kubelet; or it can be done by creating two different...
A: I have to cut us off, because there are back-to-back meetings, so I do think we should continue this conversation, but perhaps we should do an ad hoc; maybe set up something if we need to, or we can take it to Slack for the time being. But I know that there's another meeting coming in now, so we should probably close up. Okay.