C
A
C
A
So first, well, let's fill in the attendees. Yes, so first we have SIG Federation, and from that we might file the issue that kube-up is deprecated and we should move everything over from it, and there was a thread about Federation and how we should move that over, which is interesting. So I guess we're going to talk about that now in detail — that's Madhu's item. Okay.
F
F
F
The thing — or, what is required — is to move the current mechanism of bringing up the clusters to a kubeadm-based deployment method for Federation. So these are a few smaller tasks we have broken down, like, to make progress towards this target. There are a few points we need to discuss, and most of the things are straightforward, I think, given the current target that we are using in testing.
F
This kubeadm bring-up — and most of this, you may see, like the networking — is the same combination. So, like, our vision — we were discussing, actually, me and Madhu — should we bring up the multiple clusters within kubernetes-anywhere, or within kubetest, like, external to kubernetes-anywhere? And also, the same applies to the Federation control plane bring-up.
F
So we were thinking, like, we should do the multiple-cluster bring-up inside kubernetes-anywhere, like another configuration option where we can specify how many clusters we want to bring up, and then based on that we choose in which zones to place the clusters and bring them up. Okay, that's one part of it, and the second part of it is the bring-up of the Federation control plane. So, I mean, I'm not sure whether it should be within kubernetes-anywhere or not.
C
I feel like both of those things should not be in kubernetes-anywhere. I mean, maybe the Federation control plane — there could be an option for that — but it doesn't really make sense to me for a single call of "create a cluster" to all of a sudden create multiple clusters. That's pretty easy to layer on top of it: create multiple clusters in parallel from kubetest, yeah.
F
E
Well, one argument that I had was: terraform already provides this — terraform automatically already does things in parallel. So I was suggesting that we just generate the required JSON file — or change the jsonnet file to generate multiple of these things and generate a single JSON file — and create all the infrastructure in there; terraform is anyway going to create things in parallel, and that's somewhat easier for debugging, especially given how the logs are not separated. If you are going to do things in parallel in kubetest, interleaved logging might cause problems, was my point.
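One way to sketch this suggestion — expanding one cluster definition into a single spec describing N clusters, which a single terraform apply could then create in parallel — might look like the following. This is purely illustrative: the field names and layout are invented, and the real kubernetes-anywhere config is jsonnet, not hand-built JSON.

```python
import json


def make_multi_cluster_spec(base_name, count, zones):
    """Expand one cluster definition into `count` copies, round-robining
    the clusters across the given zones, and emit a single JSON document
    that one terraform run could consume in a single parallel apply.
    All field names here are hypothetical."""
    clusters = []
    for i in range(count):
        clusters.append({
            "name": f"{base_name}-{i}",
            "zone": zones[i % len(zones)],
        })
    return json.dumps({"clusters": clusters}, indent=2)


print(make_multi_cluster_spec("federation", 3, ["us-central1-a", "us-central1-b"]))
```

The point of generating one document rather than invoking the tool N times is exactly the one made above: a single apply lets terraform's own dependency graph drive the parallelism.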
E
F
C
So it's not just changing the terraform, though, right? Because in kubernetes-anywhere — well, in the terraform you could say "create me sort of three copies of my cluster", in the same zone or in different zones, but there's also, like, the flow through the following steps. After you run the terraform step, you're going to have to somehow run sort of three different kubeadms in phase two of kubernetes-anywhere, right? Yeah.
C
D
Yes — so we have startup scripts for the VMs; phase 2 is logically separated, but in the implementation it's actually attached to phase 1, as part of the startup script. So I think the difficulty there is going to be creating the JSON-like files for really all of the resources that you need, and I don't really know where we should land with this one. I'm kind of on the fence about whether it should be a layer above kubernetes-anywhere or inside it.
E
E
C
E
E
We are not sure, okay. Last time I tried kubernetes-anywhere, it printed a ton of logs; depending on the mode, I think there are different logging levels, but at some level, when I wanted to debug it, it printed a ton of logs, and sifting through things was already difficult for one cluster. Okay.
C
Does that change at this scale, though, right? Because if you're using kubetest to execute terraform, presumably you can redirect the log from each separate invocation of terraform to different files, or you can capture them and prefix every line. I mean, like, it seems like there should be a way to wrap the terraform output that way.
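The per-line prefixing suggested here can be sketched generically: wrap each cluster's child process and tag every output line with the cluster name, so interleaved parallel logs stay attributable. This is a minimal sketch, not actual kubetest code; the command run below is just `echo` for illustration.

```python
import subprocess


def run_prefixed(cluster_name, cmd):
    """Run a command and yield its stdout lines, each tagged with the
    cluster name, so output from parallel invocations can be merged
    into one stream and still be attributed to the right cluster."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        yield f"[{cluster_name}] {line.rstrip()}"
    proc.wait()


for tagged in run_prefixed("cluster-0", ["echo", "terraform apply complete"]):
    print(tagged)
```

Redirecting each invocation's stream to its own file is the even simpler alternative mentioned in the discussion; the prefixing variant keeps a single combined log readable.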
E
E
Yeah, yeah, that's true. We'd be running in CI mode, or in presubmit mode, where we don't really have direct access to the standard output for those. My worry is it might be problematic for local debugging, but again, I don't want to, like, hold things up on this; maybe either approach is fine. Okay, yeah.
C
It just seems like a bit of a layering violation to me to make something that's intended to deploy a cluster all of a sudden have logic to know how to deploy multiple clusters, and each of those clusters, presumably, is not really going to be very different from the others — like, you probably want to just deploy three of the same thing. Is that really useful to someone who's just trying to use kubernetes-anywhere as a direct tool?
C
B
F
One more point, in addition: what we are having is, like, the current bring-up is kind of sequential, and it takes a lot of time — around 50 minutes to bring up three clusters — and that's why we have alternative mechanisms to recycle the clusters. So that's one of the pains right now. So we probably want to do it in parallel, and we probably also want to easily find out whatever the issues are; so probably the logs also should have some markers to identify things, for which cluster it is, yeah.
F
F
Networking again, okay. I just realized the other day that, like, right now only the default network is being used on GCE, so there is no provisioning of any new network. If I specify — you can do it there in the context — if I specify any network, it won't be honored. So that's one of the issues, but I think it should be okay for now, because right now the e2e scenario, or the mechanism calling kubetest, is assigning the network, and that network is what we are using; but really, default is also fine, I think.
F
C
F
E
No, those were just the comments that I left on the issue the other day, but that's a thing we can bring up offline, and it seems fine right now, so yeah. The second point that I made on the issue was already discussed now, so that's all done. The only issue that I want to discuss right now is the kubeconfig one. Right now terraform populates a kubeconfig, and I think it's stored in a temporary directory or somewhere. We want kubeconfig merging.
D
So one thing that you should be aware of — kind of a gotcha that isn't the question you're asking, but a different question — is sort of a remnant: if you've looked at the phase 2 implementation with Ignition, which predated the kubeadm implementation, the terraform configuration was populating a local kubeconfig file. As you probably saw, that file actually won't work with kubeadm phase two; kubeadm generates its own kubeconfig directly on the master, which we then have to fetch as a secondary step.
D
So there's an additional parameter you can pass in kubernetes-anywhere when you do "make deploy": you can say "wait for kubeconfig", and it will actually reach out to the master via SSH and then grab the new kubeconfig file. You need to use that file instead of the one that might happen to be in, like, a temp directory or wherever terraform is placing one locally. So, assuming you're using the right one: what kind of merging are you trying to do with the multiple kubeconfig files?
E
E
So we're doing Federation: obviously we have multiple clusters, and so we have multiple kubeconfigs. The e2e tests we are running rely on the fact that these are all in the same kubeconfig file — they actually read the kubeconfig file from disk — so we want these things to be in a single file for the e2e steps. How do we go about that?
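For reference, kubectl itself can already do this merge: setting `KUBECONFIG=c1:c2:c3` and running `kubectl config view --flatten` emits one merged config. The core of that merge can be sketched as below — treating kubeconfigs as already-parsed dicts for illustration (real files are YAML), and only loosely mirroring kubectl's rules, which also de-duplicate entries by name.

```python
def merge_kubeconfigs(configs):
    """Concatenate the clusters/contexts/users lists of several parsed
    kubeconfig documents into one document, keeping the first
    current-context encountered (a simplified sketch of kubectl's
    merge behavior)."""
    merged = {"apiVersion": "v1", "kind": "Config",
              "clusters": [], "contexts": [], "users": [],
              "current-context": ""}
    for cfg in configs:
        for key in ("clusters", "contexts", "users"):
            merged[key].extend(cfg.get(key, []))
        if not merged["current-context"]:
            merged["current-context"] = cfg.get("current-context", "")
    return merged
```

With the per-cluster kubeconfigs fetched from each master, a step like this (or the kubectl one-liner) would produce the single file the Federation e2e tests expect.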
E
C
E
A
E
Correct — and on the other side, that's right, that part of the infrastructure is already there in kubetest: today it can bring up multiple kubernetes clusters and then run the e2e tests. So that part of the problem is solved, as long as we figure out how we bring up multiple clusters for Federation and get the kubeconfigs merged.
C
E
D
E
So, right now, our second class of CI is simple: it brings up all the clusters every single time it runs the tests, but that's super slow, and we had a hard time analyzing things with that process. Running the kube-up and kube-down scripts, with so many things in between — our tests usually take about 80 minutes.
E
If we do everything as one bring-up — and tear-down — every single time, of which 50 minutes is just bringing up and tearing down clusters, that's not viable for presubmit, right? We did not want to make every single person writing a PR wait for an hour just to bring up and tear down a cluster.
E
So this complicated setup that we have is: we bring up the clusters once every day, at midnight Pacific time, and then just reuse those clusters, and actually bring up and tear down the Federation control plane every single time. This has caused issues: there are PRs which get merged into kubernetes that go out of sync with the Federation control plane, and that just blows it up, making it flaky for everyone. So with this migration we are also trying to solve that problem, and we think we can, if we can bring up clusters in parallel.
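The parallel bring-up mentioned here could be layered on top of a single-cluster deploy step roughly as follows. This is a generic sketch: `bring_up` is a stand-in for whatever actually invokes kubernetes-anywhere or kubetest for one cluster, not a real function from either tool.

```python
from concurrent.futures import ThreadPoolExecutor


def bring_up(name):
    # Stand-in for the real single-cluster deploy (terraform + kubeadm).
    return f"{name}: up"


def bring_up_all(names):
    """Deploy several clusters concurrently and collect per-cluster
    results, so a slow or failed bring-up is easy to attribute."""
    with ThreadPoolExecutor(max_workers=len(names)) as pool:
        return dict(zip(names, pool.map(bring_up, names)))


print(bring_up_all(["federation-0", "federation-1", "federation-2"]))
```

Keeping results keyed by cluster name dovetails with the earlier point about log markers: every failure stays attributable to one cluster.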
E
A
E
E
Once cluster bring-up is finished, bringing up the Federation control plane is really fast — it usually shouldn't take more than a minute or two — so yeah, that time is workable. And there are not that many Federation tests — we only have about 26 to 30 e2es — and they all run in parallel. So it's —
E
F
A
D
F
D
F
E
C
Thanks. Oh, and one of the things I learned yesterday: there's no submit queue on the test-infra repo. So if your PRs for that repo are sitting, please poke someone that can merge them, so we can get them in. Okay — that surprised a few of us; when we talked to Eric, I was like, "when is this going in?", and it's like, "oh, you have to click the button". So I won't push you.
D
C
A
A
C
I think, maybe for the UX, it would be useful to do the demo during the broader meeting on Tuesdays — let's get more feedback. I think here we can focus on the nitty-gritty implementation details, but, like, the UX is something that everybody who uses the commands will have to deal with, so more eyes on that would be useful, I think.
C
A
A
So, yes, one large oversight that I realized was that we're now, like, talking to the DaemonSets and want to have this new upgrade strategy, but it's of no use for our, like, 1.7-to-1.8 migration, because 1.7 doesn't have this field. So it's useful first in the 1.8-to-1.9 migration. So, the way I implemented it now is just, like, this:
A
The new manifests are written to a temporary directory, and then it moves the manifests over one by one, checks that each comes up clean, and then goes on to the next one. And then, after the static pod manifests are upgraded, it's going to upgrade the self-hosted components. But I'm not sure — like, right now we don't strictly need self-hosting for this task, actually — but, I mean, should we enable it? Let's do it by default anyway? I mean, I think it's good to be, like, future-proof, but yeah.
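The move-one-manifest-at-a-time upgrade described here can be sketched as follows. The structure is the real idea (kubeadm's static pod path is /etc/kubernetes/manifests, and the kubelet restarts a static pod when its manifest changes), but the function names and the health check below are placeholders, not kubeadm's actual code.

```python
import shutil
from pathlib import Path


def upgrade_static_pods(staging_dir, manifest_dir, wait_healthy):
    """Move upgraded static pod manifests into place one at a time,
    waiting for each component to come up clean before touching the
    next one, so a bad manifest stops the rollout early.
    `wait_healthy` is a hypothetical callback that blocks until the
    named component is healthy and returns True/False."""
    for new_manifest in sorted(Path(staging_dir).glob("*.yaml")):
        target = Path(manifest_dir) / new_manifest.name
        shutil.move(str(new_manifest), str(target))
        if not wait_healthy(target.stem):
            raise RuntimeError(f"{target.stem} did not come up clean")
```

Staging everything first and only then swapping manifests in, one by one, is what makes the rollout interruptible: the first unhealthy component halts the upgrade before the rest of the control plane is touched.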
C
A
Well, we might — I mean, I could add support for upgrading it. So for the cluster, like, the hacky way with the temporary directory thing — I could add support for it, so it doesn't error out if you do that. But still, I mean, should we do it as a post-upgrade step, or should we just wait and do it, like, in the 1.8-to-1.9 migration, where we will know that, like, we have the right upgrade strategy on the DaemonSets? Okay.
C
I would agree. I would tend to lean towards: for now, we don't migrate people to self-hosted — the ones that are on 1.7 — but with 1.8 we start creating clusters as self-hosted, and then, sort of, from 1.8 forward, we keep the upgrades for self-hosted people working, and then we try to migrate people onto self-hosted with 1.9. Okay, so instead —
A
We're deploying self-hosted clusters by default — yes, with 1.8. Yeah, that sounds good to me. I mean, then for people that are like: "I'm doing kubeadm init and getting a 1.8 cluster by default, using CLI 1.8, and then I want to upgrade to 1.8.1, and I already have, of course, the self-hosted cluster" — it's okay, because, like, the upgrade strategy is there, so it's easy to do the .0-to-.1 migration. Yeah, that sounds good to me.
C
A
A
Yeah, I mean, it was a lot more code than I had expected — when I actually, like, started thinking about all the edge cases and all the policies we need to enforce, it got to be a lot of code. But luckily enough, most of it is unit testing, and, like, I've tried to list all the cases I could possibly think of in there.
A
So
for
for
the
killer,
functions
like
one
of
them
is
like,
which
versions
can
I
upgrade
to,
but
that's
a
pretty
long
function
and
that's
400
light
immunity,
and
the
other
is
like
is
my
chosen
version.
When
I
do
apply,
it's
valid
can
I
like
actually
do
it
yeah
one
one.
One
thing
to
discuss
is:
should
we
support
reconfiguration,
I
support,
upgrading
with
just
a
new
config
file
and
the
same
version
right
now?
It's
it's
a
recognized
error,
errors
out
and
says
this
is
not
supported,
but
you
can
talk
this
with
Bester
force.
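A toy version of the "which versions can I upgrade to" check can illustrate the shape of such a function. The real kubeadm implementation is far longer and checks much more; this sketch only encodes one simplified rule — no downgrades, and at most one minor version ahead — as an illustration.

```python
def can_upgrade(current, target):
    """Allow an upgrade only if the target is not older than the current
    version and is at most one minor version ahead -- a simplified
    stand-in for kubeadm's real version-skew policy checks."""
    cur_major, cur_minor = (int(p) for p in current.split(".")[:2])
    tgt_major, tgt_minor = (int(p) for p in target.split(".")[:2])
    if (tgt_major, tgt_minor) < (cur_major, cur_minor):
        return False  # no downgrades across minor versions
    return tgt_major == cur_major and tgt_minor - cur_minor <= 1


print(can_upgrade("1.7.4", "1.8.0"))  # True
print(can_upgrade("1.7.4", "1.9.0"))  # False: skips a minor version
```

The second question from the discussion — "is my chosen version valid when I do apply?" — would call a check like this, plus the reconfiguration/force-flag handling, before touching the cluster.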
C
A
C
So I think erroring out for now, and letting people override with the force flag if they think they know what they're doing, makes a lot of sense, and once we actually have upgrades working, we can go back and rethink whether we want to use sort of that similar mechanism for changing parameters. There are some use cases where that makes a lot of sense.
H
C
B
It seems like you'd be getting into weird territory, because you're literally reprovisioning — essentially it could be.
H
B
Well, there's also the notion that, like, we shouldn't be treating clusters as too precious. We should try to strive, I think, also as a group and as an ecosystem, to not treat your clusters as if they're precious ponies sometimes, and to have the ability to migrate your workloads off — it's part of some of the other stuff we've been working on — but there's the notion... a lot of people want it to be —
B
They have all these scenarios that they need to support — reconfiguration, or doing other things — that make it way more complicated than it needs to be, and if we just simplify as much as possible and say, like, okay, let's just tear down workloads and either federate them off or do some other things, it allows them, you know, to simplify their scenarios and makes all the use cases much simpler.
C
A
H
Even I do think that managing multiple clusters makes a lot of sense. In the past, when I did cluster ops, that's basically what we did: throwaway clusters, even on minor version upgrades. But in general that puts us at a risk: taking that to the extreme would mean we don't support upgrades at all, and that's a really dangerous place to be for security patches. Like, you need at least to make point upgrades — like, upgrading the kubernetes components themselves in place — work; they're a really important use case.
H
A
So I guess — well, the PR is larger than it will be once I want to finalize it. Of course, there was some, like, refactoring I had to do internally, and moving around stuff in order to make it reusable from, like, the upgrade phase of things. So I'll just split all those things out into separate logical PRs, but what's ready for review now is, like, the upgrade phases and the cmd upgrade parts.
D
Can I ask what level of feedback you are looking for on those things right now? Like, are we still challenging the UX? Are you just making sure you're on the right track, while you're still going to do a lot of refactoring? Like, obviously you're not looking for nitty-gritty, like, line-by-line review at this point, because you're still talking about refactoring and the code is going to look a lot different. So what kind of feedback are you looking for?
A
A
A
B
There's not much to it, honestly. I went and talked with Yu-Ju this last week, and we're pretty much on the same page. She wants it to be more general-purpose, and the proposal to be willfully vague on what could potentially be checkpointed in the future.
The doc will probably have a phases section to outline that we were planning on pods first, obviously, and then potentially moving to ConfigMaps.
B
G
B
The less controversial ones are ConfigMaps and pods, right; the controversial one is secrets, too.
G
People don't think the transport of a secret — how it gets over to the kubelet — is actually secure. I get that, but if it's already on the node and you're expecting that secret to live on disk — like, that's the option that we're exploring right now — it's that kubeadm is going to put secret data on disk, and the only change here is that that data exists in memory and that we're copying it to disk in those cases. If it's already on disk, that's the thing, right?
B
B
G
Fundamentally, I think that the ideal architecture for self-hosting is that there is no knowledge, living on the host, of the application that is going to be running on that host. So, fundamentally, it should be that you are able to run a container runtime and start a kubelet with enough information to securely contact an API server, and that's it; after that, all information about the applications that run on the host is distributed as kubernetes objects.
G
So, if you're running an API server, the end goal should be that everything it needs to know about how to actually run should be just objects that are distributed with it. And I get that there are, like, concerns about distributing secrets and such, but I feel like that's almost a completely separate concern.
B
B
I don't see why that would be any different than just specifying the host volume mount explicitly, because you're knitting on a subtle detail of the resource object; but essentially it is always going to be a host volume mount, because we're not going to transfer — we've specified, or these other people have specified, that they do not want to have certs and keys and tokens in secrets for the control plane, right? Well —
G
So, if you're staying with the host volume mount: what is the way in which you want it to be distributed to that node? Because I would argue that telling a node it is the master should be purely defined through labels and taints; it shouldn't be that you have to run a particular tool that is going to set up particular things on that host. It's an argument to the —
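The labels-and-taints approach argued for here looks roughly like this in a pod spec. The `node-role.kubernetes.io/master` label and taint are the ones kubeadm actually applies to masters; the rest of the manifest (pod name, image tag) is illustrative only.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
spec:
  # Schedule only onto nodes labeled as masters...
  nodeSelector:
    node-role.kubernetes.io/master: ""
  # ...and tolerate the master taint that keeps ordinary workloads off.
  tolerations:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
  containers:
  - name: kube-apiserver
    image: gcr.io/google_containers/kube-apiserver-amd64:v1.8.0
```

Under this model, which node runs the control plane is just scheduling metadata, rather than something a host-side tool has to set up in advance.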
B
B
G
Where I — I mean, I just have, I think, a pretty basic disagreement there; like, the idea is that it shouldn't be external tools — we have all of these distribution mechanisms for applications in kubernetes, and if we're concerned about secrets, we should fix secrets. We shouldn't be inventing new ways to distribute them.
B
You know, I'm not going to say... this is where the security folks in SIG Auth have very strong opinions, and every time I dive into it, it's partially considered Kabuki theater, and I don't really want to get involved in that game. I understand your point, and I actually agree with it, but there's a part of me that just says I want to listen to the security experts and not do the things that they cringe at, at this time. Well —
G
So I think that their concerns are not ones that we actually have to solve, because essentially using or not using secrets would become optional. Their concerns are about the distribution of secrets, not about whether a secret that is already local to a node in memory gets copied to disk. I don't think that they care about that, because whether terraform is copying it to disk, or the CRD is copying it to disk, or the secret gets copied to disk — and you're optionally, like, opting into that behavior —
G
— I don't think that they would find that very contentious at all. I mean, we could validate that. But just backing up: this is my thought about self-hosting — the value that you get out of it is that the applications you're going to run don't need to know anything about the host, I think.
B
I think what I want to do, so I can move forward, is: I could put an optional portion at the end that says that secrets are still an item that requires further consensus before we move forward on it — you know, just a willfully vague statement that says, like, you know, people are thinking about it; we haven't yet decided on what we want to do.
H
So — okay, so I think, part of the confusion, let me clarify: just to talk about CRDs, the use case for CRDs for self-hosting is really basically just as a secret — a type of secret that is easier to restrict access to via RBAC, right? Like, it would basically be the same as a secret. I think that part of the proposal, for sort of a future version using CRDs, is this sort of fully self-hosted version where the master is only identified by a label. Certainly — yeah, I mean, I agree
A
with Aaron, like, that this is the end goal. I mean, when you now explain, like, the checkpointing part of it, I think it's more reasonable than I thought it was yesterday. I guess, still — I mean, as Tim said, we still have this general recommendation from SIG Auth, like, "we shouldn't put control plane certificates in secrets", generally, and, I mean, I'm fine with that; I think that's okay for now, but —
A
G
I definitely agree with the phased thing — like, doing simple stuff first and then kind of moving the needle forward. I just have this huge concern that if this isn't, like, actually discussed, and at least some buy-in gotten initially on, like, "we should be targeting these things", then in the next release we're going to have another —
G
It's going to be another battle to try and get these things in, and 100%, Tectonic will just not use kubelet checkpointing, right, at all, and that will be the reality, which — I mean, I would be really, really bummed about that. So if this isn't even, like, discussed, then it's essentially, I don't know, two, three, four more releases before we'd even be able to entertain using it.
B
I think maybe we should have, like, a conversation, perhaps bringing in some of the SIG Node folks — I think folks from SIG Node, SIG Cluster Lifecycle, and SIG Auth — maybe at the next meeting of this group, rather than necessarily the broader group, to just hash out the final shape: what would we exactly want to do in this cycle, right?
B
The draft is up, and it's willfully vague, right, because Yu-Ju specified she wanted to checkpoint a lot more things, and, to be honest, I don't care — I just want to get this done, because the amount of time I'm spending communicating is far greater than on actual code, right?
B
C
Yeah. Aaron, can you take an action to try and rope the appropriate folks into the meeting next week — which, by the way, is on Wednesday instead of Friday? Okay. I think we have two and a half minutes left, so can we move on? I have one more agenda item to discuss briefly, which is: we have a new person on our team in the Google Seattle office
C
who is starting to look at upgrade tests for kubeadm, and I want to double-check with Lucas so we deconflict — I think you've mentioned on one of the issues that someone else is also looking at this. So, her name is Jessica; she is starting by trying to add a little bit of automation around the manual upgrade procedure that Jacob put in place for the 1.6-to-1.7 upgrades. Once we have that in place, it shouldn't be difficult to parameterize the upgrade test infrastructure.
A
Right, so — he has written up a doc, but I don't think he has written any code; he's working on something else in kubeadm, like phases, right now. Let me see if I can grab that — yeah, it basically was, like, a request for comments.
A
A
There we go. So, that document should be public, I think, and it basically outlines some possible solutions, and he looked into, like, how the test infra would have to be changed — it would do it pretty easily — and I think the main, like, action item from it was to be able to run arbitrary scripts or commands, like, in between or something, to be able to upgrade.
A
C
D
A
That sounds really good. I mean, as soon as my upgrade PR is merged, kubeadm needs to get e2e-tested as soon as possible, so we have that ready by then. I mean, it's probably going to take at least two weeks, I expect, before the full upgrade PR is in, given the reviews and the dependencies it has. So yeah, sounds good to me. And yeah, by the way, Jacob: could we schedule, like, getting the node testing for our thing going? Yeah.
D
Yeah, I'll post in Slack if anyone's actually interested, and I would love to do a brain dump with everyone. I just did the same thing with Jessica and realized how tangled everything is: every time I try to write it down, I have no idea how to organize it, but if it's a good format for an interactive thing, I can be all over the place. Okay.