A
Hi, this is the cluster lifecycle meeting focused on kubeadm: upgrades, tokens, self-hosting, and accessibility. Today is the 16th of August, and we have a couple of status updates and things to discuss on the agenda. Yeah, let's get started. So first, we can talk about kubeadm upgrades. I currently have a PR for review, and it's too big, I know, so we should discuss how we can layer it up in the best possible way, and I'm really happy to hear your questions and suggestions, I think.
B
Just having the apply first, as the default, and then adding the other options as a second PR seems tenable from what I've seen in the code, but I'm open to other ways you'd want to do it. Right now you have to eat everything all together, which is a bit much to swallow. I made the same comment on your PR, because I started going through it, but then there were comments along the way, like: what do dry-run and yes mean together? I was confused by that, right.
A
So it was more like: you know how apt-get or whatever works? If you do an upgrade of any kind, you have to specify the -y flag if you want it non-interactive; otherwise, it will go as far as: okay, this version is okay, I know the cluster is healthy, everything seems fine — but then ask the user: are you really sure you want to do this? and let them type in yes or no.
C
I think having a way to run it non-interactively is important. There are going to be certain environments where there is no shell; it's a system that's running this command, not someone putting a shell on the master and actually running it. They're putting it into — AWS has a command scheduler where you can run a command, but they're not going to be able to answer interactively. If we require that, somebody's just going to work around it in those kinds of scripts, you know, like pipe yes into it or whatever.
C
The place I've seen go through several iterations is Terraform, which I think is what we modeled our plan verb on, kind of, yeah. They've gone through a few iterations of how their CLI works. The latest version, I think, defaults — when you run apply — to printing out the plan and having you confirm it. So you don't necessarily always run plan ahead of time in their model. You can just run terraform apply, and then either take the default, which is to ask you, or you can force it, or say no, which means: I just want to generate the plan.
B
There are so many different incantations inside the repository — different client stubs or API server stubs or hooks or other things that allow you to do this in the testing. It would be nice to have one canonical example. I realize that might be hard, but a lot of the testing infrastructure already has similar pieces of this. Okay, I could point you to specific references where they basically start out.
A
With dry-run, people usually just print a fake Kubernetes Service object and a fake Node that has registered, and these things. But in the upgrade scenario, we actually want to target the actual values of our cluster, and forward just GET and LIST to the actual client — the actual API server that's running somewhere — and not fake them, but still disallow modification. So this was basically the best thing I could come up with, if there's a canonical way to do it.
B
Let me take a look back through your PR one more time. On the dry-run: you have two separate PRs that both have dry-run in them, but the one you asked me to review yesterday — I started taking a look through it, but the context of having that piecemeal integration was lost on me when I was reviewing it.

A
Oh, yes.
A
Sorry, sorry for not making that clear. So yeah, basically there's one function, GetDryRunClient or something, where you pass an interface called DryRunGetter, and there are two implementations of that interface. One is for init: it just does whatever is required to get init working; and the other actually redirects my GETs and LISTs to a real API server.
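The split just described — one dry-run client behind a small getter interface, with a fake implementation for init and a live-cluster-backed one for upgrades — might look roughly like this. A hedged sketch in Python (kubeadm itself is Go); the class names only echo the ones mentioned in passing, and the dict-backed "API server" is purely illustrative:

```python
from abc import ABC, abstractmethod

class DryRunGetter(ABC):
    """Answers GET/LIST-style reads on behalf of a dry-run client."""
    @abstractmethod
    def get(self, kind, name):
        ...

class InitDryRunGetter(DryRunGetter):
    """For init: fabricates just enough of an object for init to proceed."""
    def get(self, kind, name):
        return {"kind": kind, "name": name, "fake": True}

class ClientBackedDryRunGetter(DryRunGetter):
    """For upgrade: forwards reads to real cluster state.
    A plain dict stands in for the API server here."""
    def __init__(self, cluster_state):
        self._state = cluster_state
    def get(self, kind, name):
        return self._state[(kind, name)]

class DryRunClient:
    """Reads are delegated to the getter; writes are logged, never applied."""
    def __init__(self, getter):
        self._getter = getter
        self.mutation_log = []
    def get(self, kind, name):
        return self._getter.get(kind, name)
    def update(self, kind, name, obj):
        # Record the would-be mutation instead of performing it.
        self.mutation_log.append(("UPDATE", kind, name, obj))
```

The point of the shape is that reads can be answered truthfully from the real cluster during an upgrade dry-run, while every mutation is captured in a log instead of being applied.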
A
Cool, thank you. So that's one of the independent things. And then we have — I encountered, while doing the upgrade PR some weeks ago now (must have been while actually coding it the first time), race conditions in the self-hosting code. Basically, first we create the new, sort of, self-hosted variant.
A
Then we wait for the self-hosted pod to come up. Everything is fine so far, and then we remove the manifest for the static pod, and right after, we check: is the API server healthy? But there's indeed a race condition there, where the API server, when we're checking it, is healthy — but we checked the old one, because the kubelet is slow to react and hasn't yet removed the static pod. That happens in between: we check if the API server is healthy — yes — but it's the old one.
A
Unfortunately, they bind to exactly the same interface and port. Okay, so basically what I'm doing: a static pod has a mirror pod, and I'm checking that this mirror pod is deleted — then I know the kubelet has had some time to remove it — and now I expect the self-hosted one to come up in like four, five, ten seconds. So after it's deleted, I go ahead and check the healthz endpoint of the real API server.
A
Yeah, so the whole thing — we have access to kind of both, but although the self-hosted one reports that it's in the Running state, internally it's backing off, like: oh no, I couldn't bind to the port, let me wait 10 secs and I'll try again; then: oh no, I couldn't bind to this either; then it's like 20 seconds, and after a given amount of time — I don't know exactly what that point is — it will fail and exit 1 or whatever, and then, yeah.
E
It would come back with the detail — I mean, it depends on what you're actually looking for, I guess. It would come back with that detail, but the actual status of that pod, whether it's actually running or not, might be pretty stale, because that endpoint doesn't update itself unless it's actually able to persist it in an API server. The API server is down, but it's not being persisted, so you don't get fresh state. If you're just looking for a random annotation, that actually should be okay.
A
We just — we start the new thing, we wait for the new self-hosted pod to be running, then we delete the static pod, and now we wait for the static pod to be deleted. Then, finally, we check that the healthz endpoint of the new API server is okay, and then we go on to the next one — kube-controller-manager — and so forth.
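The sequence just described — create, wait for Running, delete the manifest, wait for the mirror pod to really disappear, and only then trust the health check — can be sketched as a toy Python model. Nothing here is kubeadm code; the ToyCluster, with its built-in kubelet lag, is invented purely to make the race visible:

```python
def wait(cond, limit=100):
    """Poll `cond` up to `limit` times; stands in for a timeout loop."""
    for _ in range(limit):
        if cond():
            return True
    return False

class ToyCluster:
    """Toy model: the kubelet removes the mirror pod only a few polls
    after the static manifest is deleted, mimicking its slow reaction."""
    def __init__(self, lag=3):
        self.lag = lag
        self.self_hosted = set()
        self.mirror = {"kube-apiserver"}
        self._countdown = None
    def create_self_hosted(self, name):
        self.self_hosted.add(name)
    def self_hosted_running(self, name):
        return name in self.self_hosted
    def delete_static_manifest(self, name):
        self._countdown = self.lag
    def mirror_pod_exists(self, name):
        if self._countdown is not None:
            if self._countdown == 0:
                self.mirror.discard(name)   # kubelet finally caught up
            else:
                self._countdown -= 1
        return name in self.mirror
    def healthz(self, name):
        # Healthy only once the old static pod is gone and the
        # self-hosted replacement exists (they share one address).
        return name not in self.mirror and name in self.self_hosted

def pivot_component(cluster, name):
    """The order discussed above; skipping the mirror-pod wait would let
    /healthz be answered by the old static pod -- the race in question."""
    cluster.create_self_hosted(name)
    assert wait(lambda: cluster.self_hosted_running(name))
    cluster.delete_static_manifest(name)
    assert wait(lambda: not cluster.mirror_pod_exists(name))
    assert cluster.healthz(name)
```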
A
We have another fun race condition on this topic, with a long comment, which Matt actually commented on — I think, like, "good documentation" or something. So this is the case where we do the static pod switching. Let me just quickly cover how the upgrade works: when upgrading a static pod cluster, we write new manifests to...
A
...a temp directory. So we have everything in the temp directory, and then we take the running API server's static pod manifest and rename it into another temp directory called backup, and then we move the new one to the actual path. That will make the kubelet go: oh, this is not the state I had, I will restart this as a new pod — and the newly created API server will come up in some amount of seconds, and then we proceed with the controller manager.
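A minimal sketch of that manifest shuffle, assuming manifests live as `<component>.yaml` files; the paths and naming here are illustrative, not kubeadm's actual layout:

```python
import os
import shutil
import tempfile

def swap_static_manifest(manifests_dir, component, new_manifest_text):
    """Write the new manifest into a temp dir, move the live one into a
    backup temp dir, then move the new one into place; the kubelet sees
    the changed file and restarts the pod. Returns the backup path so a
    failed upgrade can roll back to the old manifest."""
    staging = tempfile.mkdtemp(prefix="kubeadm-upgrade-")
    backup = tempfile.mkdtemp(prefix="kubeadm-backup-")
    staged_path = os.path.join(staging, component + ".yaml")
    with open(staged_path, "w") as f:
        f.write(new_manifest_text)
    live_path = os.path.join(manifests_dir, component + ".yaml")
    backup_path = os.path.join(backup, component + ".yaml")
    shutil.move(live_path, backup_path)  # keep e.g. the 1.7 manifest around
    shutil.move(staged_path, live_path)  # kubelet notices a new static pod spec
    return backup_path
```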
A
Let me see here — the tricky thing here is — oh, yeah, it was Jamie who commented on that comment. I have a function called waitForStaticPodControlPlaneHashChange. What I'm basically doing is taking a hash of the static pod object that was running before. This is, again, because we don't have many good ways to do this. So let's say we're on 1.7.0 and we want to upgrade to 1.8.
A
Instead of waiting for it to get deleted — it won't get deleted — we're waiting for the hash of the static component's mirror pod to change, and then we know that the new one has actually come up, because, again, it's really hard to get at the identity. I mean, I would have loved to use UIDs, but it turns out, after some testing, that the kubelet will not update the UID after just an image version change: it has the same name, so the kubelet thinks it's actually the same pod and everything's the same. So that's why I went with the hash, and then we can proceed; otherwise we'd get into the same race condition, where the pod doesn't restart before we proceed. And yeah, that's basically it. If we fail at any time — well, we have the 1.7 manifest in a backup directory.
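The hash-based wait could be sketched like this — hashing everything except the UID, since, as noted, the kubelet keeps the UID across an image-only change. A Python sketch with invented field names, not kubeadm's actual hashing:

```python
import hashlib
import json

def pod_hash(pod):
    """Stable hash of a mirror-pod dict. The UID is excluded on purpose:
    the kubelet keeps the UID across an image-only change, so it cannot
    signal that the new pod has come up."""
    hashable = {k: v for k, v in pod.items() if k != "uid"}
    payload = json.dumps(hashable, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def wait_for_hash_change(get_pod, old_hash, max_polls=100):
    """Poll the mirror pod until its hash differs from the pre-upgrade one."""
    for _ in range(max_polls):
        if pod_hash(get_pod()) != old_hash:
            return True
    return False
```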
B
It would be nice to have, as part of the document that you're working on, known limitations — or known issues or known conditions. Because right now, reading through the code, unless you have context, it will be difficult to fully understand. But if it's written down in some tome of knowledge — a readme would be fine.
B
How it works, like: here are all the gotchas, here are the flows and conditions it goes through. Because right now, I think there's way too much magic boiled into your brain from experience; people trying to understand it, unless they walk through all the code and do it multiple times, would have no idea why it's that way, right? Like, us spending ten minutes earlier on why you need to have that timeout block — without having it actually documented, somebody randomly walking through the code would be like: why are we doing this?
A
Yeah, I'm gonna do that. For now, I don't think I'll have time to update the doc — the design doc or proposal — with all the implementation details before the freeze, but, I mean, I guess that's why we're here. I'll definitely do my best to document it in the code and be really transparent with all these details for the two weeks that are left; then we can merge it once it's working well, and then document it for whoever reads this in the future.
A
And yeah, so we have those two tricky ones. Then we have a third — or, actually, we have two more — which are heavily unit tested. I'm really glad that Go makes it easy to write unit tests. In this case, one is: when we run apply, we give a version, and then we have to make sure it passes all the policies, like: can we upgrade to this version or not? And there were many more of these when I actually started coding and implementing everything.
A
There were many more edge cases than I had thought. So, things like: if we are on 1.7 and specify that I want to upgrade to 1.9, it should rather say: well, we support only one minor version at a time — upgrade first to 1.8 and then to 1.9. So that's one. Then, we don't support downgrades.
A
So if it's like: I'm on 1.7.2 and I want to "upgrade" to 1.7.2, it will say: no, you shouldn't do this, we don't know what will happen — but specify the force flag if you are really sure you want to do this. One case where that will be really useful is that the command behind it is idempotent. So let's say you're upgrading from 1.7 to 1.8 and your machine shuts down while it transitions the static pods; then you're kind of screwed — I mean, you can have one upgraded API server while the controller manager and the rest of the cluster isn't. And then kubeadm will say to you: oh, your cluster is 1.8 already, you can't upgrade — but then you can use the force flag just for that, and it will eventually work. So those policies then also validate that your kubelets aren't too old — all these kinds of things you can't do.
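Those policies — one minor version at a time, no downgrades, and same-version re-runs only behind a force flag for the idempotency case — boil down to something like this. A sketch only; the real checks and messages in kubeadm differ:

```python
def check_upgrade_policy(current, target, force=False):
    """Return (allowed, reason) for a requested upgrade, per the rules
    above: one minor version at a time, no downgrades, and same-version
    re-runs (the idempotency escape hatch) only behind `force`."""
    cur = tuple(int(x) for x in current.split("."))
    tgt = tuple(int(x) for x in target.split("."))
    if tgt == cur:
        return force, "already on this version; pass --force to re-run"
    if tgt < cur:
        return force, "downgrades are not supported"
    if tgt[1] - cur[1] > 1:  # assumes the same major version, for brevity
        hop = "%d.%d" % (cur[0], cur[1] + 1)
        return False, "only one minor version at a time; upgrade to %s first" % hop
    return True, "ok"
```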
A
We query the CI system, and we get, say: the latest stable is 1.8.3, and I'm on 1.7.3, and the latest 1.7 version is 1.7.5 — then I should show two upgrade possibilities: to 1.7.5 and/or to 1.8.3, and so on. It's also heavily unit tested, to make sure everything works there as expected.
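The lookup logic from that example — on 1.7.3 with 1.7.5 the latest patch in the series and 1.8.3 the latest stable, offer both — reduces to a comparison like this. A sketch only; the real version source and ordering rules are richer:

```python
def upgrade_options(current, latest_in_series, latest_stable):
    """Offer the newest patch of the current minor series plus the latest
    stable of the next series, whenever they are actually newer."""
    def parse(version):
        return tuple(int(x) for x in version.split("."))
    options = []
    for candidate in (latest_in_series, latest_stable):
        if parse(candidate) > parse(current) and candidate not in options:
            options.append(candidate)
    return options
```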
A
It's pretty easy to just do the CLI first, if you think that's good — I mean, just stub everything out: I'm now going to apply, and then I do nothing, just exit — but we will have the Cobra command there. I mean, whatever works for you in landing this. I've already split it up into like seven, eight, ten PRs, but it was about three thousand lines, so it needs, yes, much more.
A
But I mean, I could start with — that's easy for me — just pick out the cmd upgrade part and submit that: it doesn't do anything, it just has the right flags and all that. And then we have a few hundred lines of generalized code for the actual implementation — that's in phases/upgrade — that's the part that actually does something.
A
But yeah, so that's what's up with upgrades. And on dry-run support: we have to see if there's something in-tree that can be used already; otherwise, I'm going to advocate for merging this and improving it later.
A
So basically, we need this field to be populated in the authenticator to get the data, and it should have a default: if the auth-groups value — or whatever key you have on the secret — is a nil value, empty or non-existent, it should default to system:bootstrappers, like what I do now. And then you can — I mean, a comma-separated list is probably enough structure — and everything has to be under system:bootstrappers:, whatever your...
A
This is the bootstrappers group that will make the approver approve every node client CSR coming in authenticated from the system:bootstrappers group, right. But we'll change this already in 1.8 to make it only, like, system:bootstrappers:kubeadm:node or something — we'll just change this string value and the binding. So at that level, that will work pretty well, I think. Okay.
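The defaulting rule being proposed could be sketched like so. The secret key name `auth-extra-groups` is an assumption here, and the only validation shown is the prefix rule under discussion:

```python
DEFAULT_GROUP = "system:bootstrappers"
GROUP_PREFIX = DEFAULT_GROUP + ":"

def bootstrap_token_groups(secret):
    """Resolve the groups a bootstrap token authenticates as: an empty or
    missing value falls back to the default group, and any extra groups
    (comma-separated) must sit under the system:bootstrappers: prefix."""
    raw = (secret.get("auth-extra-groups") or "").strip()
    if not raw:
        return [DEFAULT_GROUP]
    extras = [g.strip() for g in raw.split(",") if g.strip()]
    for group in extras:
        if not group.startswith(GROUP_PREFIX):
            raise ValueError("group %r must start with %r" % (group, GROUP_PREFIX))
    return [DEFAULT_GROUP] + extras
```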
A
So basically, just as a strawman: well, it's really useful to have a separate identity for the CSR API, of course, so we can get a real credential. But it's kind of equally workable, for the "just get my files over" case, to just make an RBAC rule so that, like, the system:bootstrappers:kubeadm:master group can access the CSR API right now.
C
There could be some benefit to still pivoting through a CSR, because you could get an individual identity for that node instead of the bootstrap identity for the whole pool of eventual new nodes. Yeah, I think — I mean, one common use case for this might be that you might actually have individual bootstrap tokens for each node, if you have a fixed number of master nodes, exactly.
A
So that actually looks possible already, but yeah. Also: if you create any bootstrap token right now, it will be authenticated as system:bootstrappers, which will automatically give it access — although I might not want to use the CSR API at all, or I don't want to auto-approve this thing, or whatever. So we want to differentiate what kubeadm actually uses it for from the generic mechanism — like, if Bootkube has some other security policies, they shouldn't be affected by what we do, right.
C
I would also like this feature to work to unlock multiple worker pools that end up with different identities. So if you have worker pools that are in different parts of your network — like internal versus edge nodes — where you might want separate workloads, and you eventually want to run different workloads on those, isolated from each other, you have to work back up the chain, all the way up to this bootstrap step: give them different bootstrap tokens, so when they bootstrap, their CSRs are marked with different groups.
C
Our CSR approver then needs to know how to treat those groups differently and sign different kinds of certificates for each group — and whether that's just the name in the certificate or something else, one of the groups gets added into the certificate identity. And then that needs to go into the new sets of ACLs on the node object — which is being talked about now, yeah: researching how much control a node has over its own node object, in terms of labels and things like that.
C
Other issues here in the bootstrap plugins — the next one was whether we should — so, right now, a bootstrap token coming through the authentication chain sort of gets passed through all the token authenticators. The bootstrap token authenticator is one of those, and it would be a little bit more precise if we had a prefix.
C
So it's not a huge performance improvement, necessarily, because this is all cached in memory, and it's not necessarily a security thing either, because, again, if it's not a valid token, it's not a valid token — you're just saving yourself some lookups in your cache. So I don't feel strongly about this; I've got kind of mixed feelings, as do other folks. I'm okay dropping this as it is, yeah.
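What a token prefix would buy the authenticator can be shown in a few lines. The `bootstrap:` prefix string is hypothetical, chosen only for illustration, and the dict stands in for the in-memory token cache mentioned above:

```python
TOKEN_PREFIX = "bootstrap:"  # hypothetical prefix, for illustration only

class BootstrapTokenAuthenticator:
    """One authenticator in a chain. With a prefix, tokens that clearly
    belong to other authenticators are rejected before any cache lookup --
    a precision win more than a performance one, as noted above."""
    def __init__(self, tokens):
        self._tokens = tokens   # token -> username; stands in for the cache
        self.lookups = 0
    def authenticate(self, presented):
        if not presented.startswith(TOKEN_PREFIX):
            return None         # not ours; let the next authenticator try
        self.lookups += 1
        return self._tokens.get(presented[len(TOKEN_PREFIX):])
```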
A
So after seeing the actual plan, as Joel also said, nobody's excited about that migration plan. I think — definitely — I'd take Jacob's approach from yesterday; I think that was good: we'd need a really strong champion, someone saying "this needs to get done," when in doubt, right.
A
So if we have someone really championing it, we can consider it, but Jordan and myself and Joel and everyone are like: well, it would be nice from a correctness perspective, but it would need a lot of extra effort and bring no real performance gain — I mean, it's not many nanoseconds to check whether a token is of the right length and matches.
A
It would be nice, but not that much. Then we have enabling the bootstrap token controller by default, and this is like a steering committee thing, or SIG Architecture, and I don't really know how to go from here. I mean, there is resistance against it, and, at the same time, I appreciate and respect the resistance.
A
The risk — I mean, we don't want to bloat the core too much, and neither do we want to add things we'll never use and enable them by default. It's just that, I think — well, it's also that the bootstrap token authenticator isn't enabled by default either, and it's also an edge case, because all the other authentication modules are enabled if you have specified the file, right — like basic auth, for example, or tokens: if you have a CSV file there, it enables the module — but we don't.
A
I think that's going to go pretty smoothly. I mean, my two-line change to enable the controllers is non-trivial, but I think that one will be fine: we basically just have to make sure we don't break existing clusters. So for one release — for 1.8 — I think the experimental bootstrap token flag should take precedence over, like, the new enable flag or something like that, and then we'll mark it deprecated and remove it in 1.9. Does that sound good?
B
The checkpointing: I'm going to just push the PR this week, as well as the docs community PR. I expect there will be pushback from some folks, but, to be honest, I think there are some massive levels of miscommunication going on, to the point where it's not productive, right — I've wasted a bunch of time — so I'm just going to push it and we'll sort it out at that point.
B
Given how late we are in the cycle with everything — I probably should have pushed everything earlier, but, you know, hindsight's 20/20; I wanted to get buy-in from people, which was a huge mistake, but that's retro stuff — so I'm just going to go forwards and get it done, yeah, and get the initial version in, because all we need is something very simple, and it can be expanded upon and scoped later; there's nothing really earth-shattering about it.
E
He's been stuck on some internal release stuff for the past couple of weeks, but he started looking at it this week, and the last time I spoke to him, he said he should probably have something by end of week. He was asking me about something to be reviewed — he was asking me about doing a design doc, because in the features repo that's one of the checkmarks. I told him: for now, just do code first, yeah.
A
That sounds fine — we're good, yeah. I think we have one minute left. One thing we should — I mean, I'll send a PR about enabling self-hosting by default. It's a one-line change, just like a fall-through, and I don't think we should merge it now, because we have to make sure everything works; it is basically the last thing we enable before the freeze. If...