From YouTube: Kubernetes Kops Office Hours 20181012
A
Hello, everyone: it is Friday, October 12th. This is Kops office hours. We do have a couple of items, or rather a number of items, on the agenda. Please do add your name and any agenda items you would like to cover to the agenda. Otherwise, as we do have a bunch of things on there, I suggest we get started. Chris, it looks like we had some news, or Christian I should say.
B
I think I didn't... I got a response from Nikhita and Paris and, what was the name again? You know, from CNCF. So apparently, since we are, you could say, not an upstream project, they said CNCF is not going to sponsor the intern, but they said: well, why don't you get AWS to sponsor it? I figured getting AWS to sponsor the person implementing it is probably not going to happen, but in general...
B
Well, if we find a company who is actually sponsoring the intern, that would be completely sufficient, but they also recommended taking a more generic issue, to really find someone who could also be interested in doing this, because I got no response whatsoever on Slack; I pretty much lost everything. So far I didn't really follow up on that, but if you have new updates or things, feel free to let me know and I can push it forward with CNCF and all the other people involved.
A
Sorry... you definitely asked AWS, and we can also work on the more generic projects. I think it's good feedback, and maybe we don't get it this time, but yeah, certainly there's another iteration. Sorry I missed that on Slack; I will try to figure out what's going on and ping some people, maybe.
A
I mean, definitely ask. You know, if anyone has any contacts or any similar intern-type projects that they would like to see happen, then please do think about whether this is an appropriate vehicle, I guess. And thank you for pursuing that; even if it didn't work out, or hasn't worked out yet, it's great to do that. The next item on the agenda is also yours, Chris: the release.
A
So I think we've basically merged most of the sort of critical PRs, so I suggest that we should cut 1.11 alpha 1, unless anyone is aware of anything. There is the Spotinst PR, which I think is basically ready to go in; there's one issue around the licensing of a library they added, we just need to make sure that it has a license on it. It's nothing serious, I don't think.
A
That PR looks like it's in great shape, so we'd probably merge it after 1.11 alpha 1, and that would hopefully make 1.11 alpha 2 or beta 1, which, you know, we tend to do fairly fast once we start the release going. So yeah, I see it later on the agenda, so I don't want to preempt it, but yeah, there are still ongoing... there are still areas which don't work with etcd-manager, Calico for example, but why don't we talk about those? I don't think
A
that's a blocker in 1.11. We're not gonna make etcd-manager required; I think we're even gonna turn it on by default in 1.12, so it's still gonna be opt-in. So I don't think those issues are necessarily blockers for 1.11, but obviously we do want to fix them; we could talk about the strategies we have to do that. I don't know if anyone else is aware of any other big issues of that nature.
A
And yeah, I think that for 1.12, I would like to get the bundles-type functionality into 1.12. I think that will help our velocity; it's a big change, but I prefer it in that we don't have to change kops as much, I mean, maybe we don't have to deal with quite as many PRs to, like, rev everything, but I...
A
Oh, and the next item on the agenda is... so, if no one objects, then I will try to cut 1.11 alpha 1 this weekend. Yeah, okay, good, awesome. We can talk more about the Calico issue later on, I guess, or, yeah, the etcd-manager Calico issues later on, and maybe we revise that decision, but I think we should proceed.
A
I wrote up a straw man on what we should do for the kops binaries. Now, that's a relatively different use case from what they're considering to date, which has primarily been around thinking about images. Unfortunately, their meeting is happening right now, so I'm still in conflicts, but I've bugged them in Slack and told them, shamed them, tried to shame them, into saying that I've told everyone in this meeting that they've agreed to it, basically. So we'll see what they actually do.
A
But there is a straw man document there about how we can basically set up some buckets in S3 and in GCS in the CNCF accounts, how a release would work in a more automated way, and how promotion would work. They're talking about an image promoter that essentially relies on a manifest that is controlled in git and is approved as a normal PR.
A
So a release would be done via a PR that says: release kops 1.11 alpha 1, and here's some other stuff. In our case I'm suggesting we put in some hashes of what 1.11 alpha 1 is, and then a different person would approve the release, and then the bot would actually go and copy the artifacts around and make the release happen, as it were. So it's actually not terribly different from what
A
we have today, except that it would be done by bots, and there would be this sort of audit trail, rather than being done by me, or by me with someone else, which I appreciate, but yeah, it's not great to have a single point of failure there. And we're also trying to move to, or I'm suggesting we move to, trusting locations less and working more on hashes. It's difficult; we have a lot of that
A
in kops today, but we still do things like fetch the hash from, you know, an S3 bucket, for example, so the root of trust is sort of tricky. But if we can get this figured out, we can also make it really easy to run your own kops mirror, or air-gapped, and also multiple mirrors. So we'd have, you know, S3 and GCS, so if S3 or GCS went down, you could fall back to the other one transparently. How useful that would be for kops...
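For a rough idea of the shape such a hash-based release manifest could take, here is a minimal sketch; the file layout, bucket names, and field names are all hypothetical, since the real format would be whatever the straw man document and the promoter tooling settle on:

    # Hypothetical manifest committed via PR; a reviewer approves it and a bot
    # then copies the artifacts into the mirrors and verifies the hashes.
    cat > releases/1.11.0-alpha.1.yaml <<'EOF'
    release: 1.11.0-alpha.1
    artifacts:
      - name: kops-linux-amd64
        sha256: 0000000000000000000000000000000000000000000000000000000000000000  # placeholder
        mirrors:
          - s3://example-kops-releases/1.11.0-alpha.1/linux/amd64/kops
          - gs://example-kops-releases/1.11.0-alpha.1/linux/amd64/kops
    EOF

A client could then try each mirror in turn and accept whichever download matches the pinned hash, which is the transparent fallback behavior described above.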
A
Certainly... that's right, but like right now we pull nodeup from... yeah, no, no, you're right, you're right, yes. That is true, and one of the things, I think... I don't know if you saw the work that gambol99 did on the node authorizer stuff; I'm wondering if we can put some more information in that channel and maybe have the nodes not have to pull from S3, but anyway, longer term.
A
But yes, it would be great if those AMIs were run by CNCF; that's another good one. And then the other ones are the AMIs, the images themselves, and the binaries; those are the three things that we rely on. Oh, and the fourth thing, that's right, is the channels files. So currently the channels files are less important, but they may become more important in the bundle.
E
Yeah, so I've been a happy user of kops for a long time now; it's how I've deployed my AWS Kubernetes clusters, and I'm just kind of hitting some issues recently with, you know, pod networking. So I just kind of had some issues with... you know, we run like a Ruby on Rails application with CouchDB.
E
All of that managed by Kubernetes, and every once in a while, it just means application pods can't reach the database pod, and we get, like, either DNS errors or an IP-unreachable error. And, you know, I don't know if this is a kops thing; I need to know more. So yeah, it's kind of a user question: how do I debug this? Is there, like, a kops-approved way of capturing this kind of debugging information? Yeah.
A
There's definitely a Kubernetes service diagnostics page, which is pretty good (it was very good); I may go find it later, but it talks about how you can narrow down whether it's effectively your pod-to-pod networking, or your service layer or kube-proxy, or whether it's something else. From what you've said, you're using kubenet, right, using the default networking? Yes.
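The debugging flow that page describes boils down to checking each layer in turn; a minimal sketch of it, with the pod name, service name, IPs, and the CouchDB port standing in for the real ones:

    # 1. Does cluster DNS work at all from the affected pod?
    kubectl exec -it app-pod -- nslookup kubernetes.default
    # 2. Does the failing service resolve?
    kubectl exec -it app-pod -- nslookup couchdb.default.svc.cluster.local
    # 3. Bypass DNS: can you reach the service IP directly (kube-proxy layer)?
    kubectl exec -it app-pod -- wget -qO- http://10.96.0.123:5984/
    # 4. Bypass the service too: can you reach the backing pod IP (pod-to-pod layer)?
    kubectl exec -it app-pod -- wget -qO- http://10.32.4.7:5984/

Whichever step fails first tells you which layer (DNS, kube-proxy, or pod-to-pod networking) to dig into.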
A
There are known... some people have observed a kernel bug with iptables, like a connection tracking collision, I guess is how I would describe it. I personally am working to reproduce the DNS one; the symptom is that you see DNS resolution taking abnormally long, like thousands of times longer, like five seconds. Well, it's not just five seconds, but yes, oh yeah, there are multiple tiers, but yes, something goes...
A
DNS is, like, not working, basically, sort of in bursts, or occasionally, and it's not clear why. Well, yes, but that's not the service-IP thing you described, so let's come back to that one in a second. Okay, okay, but then the answer to that one is, honestly: we don't have a great solution today, other than scaling up does seem to help a little, or running it locally on every pod.
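One commonly cited mitigation for the conntrack race mentioned above is the glibc resolver option single-request-reopen, which can be injected per pod via dnsConfig; a minimal sketch (pod name and image are arbitrary, and note that musl-based images such as Alpine ignore these resolv.conf options):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: dns-race-demo
    spec:
      dnsConfig:
        options:
        - name: single-request-reopen  # retry the parallel A/AAAA lookups on a fresh socket
      containers:
      - name: app
        image: ubuntu
        command: ["sleep", "3600"]
    EOF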
A
The other known problem, like definitely known, is that UDP isn't reliable, and if UDP packets drop, you will see longer timeouts. So the theory is that maybe, for some reason, UDP is less reliable in Kubernetes (doesn't make a ton of sense), but the idea is that by running a local agent, the backhaul from your node to the central service doesn't have to be UDP, and UDP on a single node should be reliable.
E
Stability issues when we run... we run kind of a crappy multi-tenant system, where we've got this web server application, and then, for each one of our customers, we spin up a new pod just for them. So that means that when we do a release, we're doing a lot of stuff; we're spinning up a lot of pods, and, you know, the pods are... you know, they've got health checks and all that.
C
Personally, I wouldn't say it's kops; I've run into a lot of this stuff, and people on this call have probably seen me talk about it too, the DNS stuff. We switched to TCP-based DNS for a couple of our services that had issues; I can't find the issue right now, but there are a few documented ones that would explain this. There were a couple of really good blog posts on the DNS request problem, and forcing it over to use TCP for DNS, which is slower but reliable, was one solution.
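The TCP-based DNS switch described here can be done per pod with the glibc resolver option use-vc; a minimal sketch (pod name and image hypothetical; again, this is honored by glibc resolvers, not musl):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: tcp-dns-demo
    spec:
      dnsConfig:
        options:
        - name: use-vc  # "virtual circuit": the resolver uses TCP instead of UDP
      containers:
      - name: app
        image: ubuntu
        command: ["sleep", "3600"]
    EOF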
C
The problem was, I never actually did an independent test of which one was the fix, whether it was the TCP change or the actual, you know, underlying one; I had too much on my plate, so I made the changes at the exact same time. But we just never had more timeouts, and we had been getting all sorts of random ones; you know, I had checks on the containers' IPs as well as the DNS stuff, so I was certain it was DNS-related, and by switching it over we got rid of all the DNS issues.
H
I also sent the link to the GitHub issue, which contains some good material. I would also add another point, which you can always try if you want to understand the problem a little bit better: if, from your application, you don't need to talk to another Kubernetes application, so you don't need service IPs and that kind of stuff, you can change the DNS policy to Default.
H
It's funny that the default is not the value Default, so you actually have to type "Default", and this will essentially not talk to kube-dns, not to the cluster DNS, but will use the DNS of the machine where it's running. So you will be bypassing kube-dns, CoreDNS, whatever it is, and I feel that this improves things. So if, for example, you're deploying an application and your database is in AWS, so it's maybe RDS or whatever it is, you might not need to actually use the service IPs.
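A minimal sketch of what is being described (pod name and image hypothetical); the literal string is Default, and it means "inherit the node's resolv.conf" rather than the cluster DNS:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: rds-client
    spec:
      dnsPolicy: Default  # use the node's DNS config, bypassing kube-dns/CoreDNS
      containers:
      - name: app
        image: ubuntu
        command: ["sleep", "3600"]
    EOF

Cluster-internal service names will no longer resolve from such a pod, which is why this only suits pods that talk to external endpoints like RDS.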
E
That's interesting; in this case we do talk to both an RDS database and to a Kubernetes Couch database, and it's always the Couch database that has the connectivity issues. Is there any way that I can instrument the kernel networking stack to kind of, like, capture ongoing packet drops or missing stuff, so I could correlate a storm of misses, you know, whatever's going wrong, with, you know, a pod deploy or something? Does anybody do that? I'm...
A
Working on doing that. The tool that I think I need to use is called tcpdump; it's basically like a full packet dump, and it's gonna produce a lot of data, and I'm gonna have to try to figure out how it all fits. My working theory is that somewhere in between the pod's tcpdump and the destination pod's tcpdump, the packets will drop, will be gone, and I...
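A sketch of that two-ended capture; the node names, pod names, pod IP, and file paths are all hypothetical:

    # Find which nodes the two pods are scheduled on (and their pod IPs):
    kubectl get pods app-pod db-pod -o wide
    # Capture on both nodes at once, filtering on the destination pod's IP:
    ssh node-a 'sudo tcpdump -ni any host 10.32.4.7 -w /tmp/client-side.pcap' &
    ssh node-b 'sudo tcpdump -ni any host 10.32.4.7 -w /tmp/server-side.pcap' &
    # Reproduce the failure, stop the captures, then compare the two files:
    # a packet present in client-side.pcap but absent from server-side.pcap
    # was lost somewhere in between.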
A
That agrees with me. And the other thing is, on the TCP/IP connectivity-type issues: if you're using kubenet, you might want to just double-check that kube-controller-manager isn't, like, outputting errors managing the route tables, you know, more than like 50 nodes, for example; just have a look at the route table in the console and sanity-check it, I guess, you know. Okay.
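A sketch of that sanity check (the label selector and route table ID are hypothetical); with kubenet, kube-controller-manager programs one VPC route per node, and AWS route tables default to a 50-route limit:

    # Look for route-programming errors from kube-controller-manager:
    kubectl -n kube-system logs -l k8s-app=kube-controller-manager | grep -i route
    # Eyeball the VPC route table itself: one pod CIDR per node, no blackhole routes:
    aws ec2 describe-route-tables --route-table-ids rtb-0123456789abcdef0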
E
Yeah, I should try that. Oh, and for the tcpdump: have you looked at something like Packetbeat, right, because all of those build on top of libpcap? And... I was going to do something that would poll the kernel for the networking stack information, because doing a full packet capture might be computationally expensive system-wide, I don't know.
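Polling the kernel's own counters is indeed much cheaper than a full capture; a sketch of the sort of thing that can be sampled on a node (nothing beyond standard Linux plus conntrack-tools assumed):

    # UDP-level errors and buffer drops:
    grep -A1 '^Udp:' /proc/net/snmp
    # Conntrack stats; a growing insert_failed counter is the classic
    # signature of the DNS conntrack race discussed earlier:
    sudo conntrack -S | grep -o 'insert_failed=[0-9]*'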
A
I guess similar to what Jason has, where, like, I have a test DNS probe, and it does happen. Anecdotally, in my limited testing so far (I'm not trying to diss anyone), it seems to happen more often on AWS than it does on GCP. I don't know why; obviously the image isn't identical, so that sort of thing is different, but my hope is to figure out on AWS what is going wrong. Yes.
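A trivial version of such a probe, for anyone who wants to reproduce the comparison; the name looked up and the interval are arbitrary, and GNU date is assumed for the millisecond timing:

    # Log how long each cluster-DNS lookup takes; run one of these on each cloud.
    while true; do
      start=$(date +%s%N)
      if ! nslookup kubernetes.default.svc.cluster.local >/dev/null 2>&1; then
        echo "$(date -Is) lookup FAILED"
      fi
      echo "$(date -Is) lookup took $(( ($(date +%s%N) - start) / 1000000 )) ms"
      sleep 1
    done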
A
If you put... would you mind putting a link to that in the notes, in the agenda? Yeah, thank you. I haven't found the actual... I found one of the DNS documents; I haven't found the one I was actually thinking of, which is around how to diagnose general networking, but I hope to find it after these office hours. Oh, another...
A
I think that's to raise the limit, yep. Yeah, so if you were the only person in the world reporting this, and you were saying this, then I would certainly say to try with a lower pod density. But I think it happens across the board, across CNI providers, across everything, and so I wouldn't necessarily say it wouldn't help, but I wouldn't say it's the smoking gun, yeah.
E
We're using it in another cluster, but that cluster is not as highly loaded; I mean, it works, and I've had to upgrade to their CNI 1.2 version to support... You know, it kind of fails sometimes, but it fails in a nice way, where it just... you can't allocate new pods; it doesn't, like, break. But it's not highly loaded, so I can't give any...
A
And there are, I believe, lower... there's another, different limit when you use the AWS VPC CNI provider, because of the number of ENIs and IPs per instance type, right. So that might be the pod limit you're hitting, yeah, and the network.
J
I just wanted to touch base and see where that was at. We definitely tried etcd-manager in one of our test clusters and quickly realized that it doesn't work with Calico at the moment. So I didn't know if you had an update on where that was, what the plan of attack there was, or whether there was anything we could do to sort of help out. I don't have a whole lot of coding time at the moment, but I'm...
A
Well, it doesn't work with Calico when Calico tries to talk directly to etcd, and there's a smaller issue with the rolling upgrade, or, yeah, rolling upgrade when you're HA; just that the logic needs to be updated in kops to, like, kick etcd-manager effectively. That third one is relatively minor. I will say one meta thing about the project: we're trying to move it out of kopeio, I guess, into a sig project, under sig-cluster-lifecycle.
A
That'd be great. Am I right in saying that, if we have a nice upgrade path, the idea would be Calico v3 with CRDs? Yeah, that's my assumption. So, great, thank you for surfacing that, and so I guess what we need to do is figure out that upgrade path, and, yeah, you don't want to rebuild your 15 clusters, right. Okay.
A
So, and I hope someone more knowledgeable can chime in here, I think canal is flannel plus Calico network policy, and I think Calico network policy uses CRDs or NetworkPolicy objects, so it doesn't talk directly to etcd. And I think flannel doesn't talk directly to etcd either, because it's only... it's only specifically Calico, and not canal, where we open up that firewall rule that allows access, so I think this should be specific to Calico.
A
So the challenge is that I have been told to upload them in chronological order, which means I have to compile them all first. I think I might just break the chronological-order rule and they will go on. There is a YouTube channel, but it is currently empty; I will remedy that, I will put that on the list. They expect a lot of recordings, and hopefully most of them... If you want some help with that, Justin, just let us know. Thank you; with any recordings...
A
Yes, I think Jason also did a... Jason Price did a little PR saying that we should say kops-server is stalled out, but I think I'm not gonna merge that, because kops-server is not stalled out, because we can do it on CRDs. So the machines... I don't remember my status, whether I actually pushed the branch. The cluster API moved to CRDs.
A
The machines API is using that, and actually it's fairly nice. I did a POC, I have a branch somewhere, where basically we have kops working against CRDs, which means you can pre-register those CRDs on a GKE or another kops cluster, or an EKS cluster, or any Kubernetes cluster, and sort of go from there and use it instead of the S3 bucket, using that as the primary store. You still need an S3 bucket, which (parentheses) is why I was like: oh, we should...
A
We should get rid of the S3 bucket at this point anyway. So it is, it is very possible. Amusingly, the most difficult thing is that our API group is "kops" and not "kops.k8s.io", and you can't register a CRD with a single-word API group, so we might have to change ours. Them's the rules is basically what I was told; no good reason, just that we should be doing fully qualified API groups. And so, yeah, there's... I opened this issue, I guess.
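To illustrate the constraint: a CRD's group has to be a fully qualified DNS-style name, so something like kops.k8s.io works where a bare "kops" is rejected. A sketch, using the v1beta1 CRD API of the day (the kind shown is just an example):

    kubectl apply -f - <<'EOF'
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: clusters.kops.k8s.io  # must be <plural>.<group>
    spec:
      group: kops.k8s.io          # fully qualified; a single word like "kops" fails validation
      version: v1alpha2
      scope: Namespaced
      names:
        plural: clusters
        singular: cluster
        kind: Cluster
    EOF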
A
You know, I mean... so it was a really great idea and it seems to work very well. There's the challenge of the API groups, and then, even once we move to API groups, you still need an S3 bucket, because nodeup, for example, pulls from an S3 bucket, and that is where gambol99's PR for the node authorizer comes in. So the node authorizer basically runs before...
A
It gets certificates, ones for kubelet; I'm trying to remember exactly where in the flow it runs, but it goes and gets a specific, or a node-specific, certificate, and it does that via an HTTP request to a component that runs on the master. So it would be very plausible for that component to also serve the files that are currently in the S3 bucket, the data the node needs, which would, I think, make people pretty happy, I would guess, but yeah.
A
Great, and thank you for the suggestion; it actually works really well, so, I don't know... if it wasn't for the API group problem, I would probably just switch us over, and then our builds would also get simpler. But anyway, there's 1.11 first, and then we can figure out how soon we want to do 1.12, and maybe make the cut...