From YouTube: 20190605 scl kubeadm office hours
A
Hello, today is Wednesday, June 5th, 2019. This is the SIG Cluster Lifecycle kubeadm office hours. As always, we have the standard code of conduct policy. We have an agenda packed with serious benchmark stuff, but first I will get the doc shared here.
C
Right, yes, so this comes out of work we've been doing with some of our users. We went through the CIS benchmarks for Kubernetes and Docker against clusters that were essentially brought up with kubeadm, and what we're finding is lots of false positives, because a lot of the tooling that's been built around the CIS benchmarks is built around grepping for settings in configuration files. Often they're checking command-line flags, when we've moved to component config for the kubelet, etc. And then additionally, there are some improvements we could make.
C
If you want a properly secured cluster, a cluster that properly passes the CIS benchmarks, we need to be able to modify the static pod definitions. So how do we do that? For each of these issues I've given some suggestions that might or might not work, but they're just ideas, and I'm open to discussing them in the document.
B
So do we want to go one item at a time? I basically have comments for everything, and other people probably have comments as well, for instance on the health check problem. You should probably explain each problem in turn, and then we can collect some opinions from the people present.
C
There was a suggestion from Justin to add an unauthenticated health endpoint that could be used regardless of whether or not anonymous auth is on. But additionally, it's not actually that big a security risk, because we do actually check authorization regardless, apart from the health and metrics endpoints. So the question is whether this control is correct in itself, or whether we should add an additional endpoint.
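For context, the check in question concerns the kubelet's authentication and authorization settings, which kubeadm now sets through the kubelet component config rather than command-line flags. A minimal sketch of the relevant fields (values are illustrative of a kubeadm-style setup, not an authoritative default list):

```yaml
# KubeletConfiguration fragment (kubelet component config).
# Anonymous auth is disabled and authorization is delegated to the
# API server via webhook; only the local healthz surface stays open.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false          # the CIS check: anonymous-auth must be false
  webhook:
    enabled: true
authorization:
  mode: Webhook             # requests are authorized, not just authenticated
healthzBindAddress: 127.0.0.1
healthzPort: 10248          # local healthz endpoint used by liveness checks
```

Benchmark tooling that greps the kubelet's flags for `--anonymous-auth=false` will report a false positive here, because the setting lives in this file instead.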
D
So, he's been getting involved in the discussions on the benchmark; I'm just going to try and give a little bit of background here. The CIS benchmark is a kind of community project, so if anybody does want to jump in and get involved in the discussion, you simply have to sign up and join on the CIS WorkBench site (workbench.cisecurity.org).
D
So
you
know
it's
absolutely
up
for
debate
and
discussion,
the
latest
version
or
so
I'm,
looking
at
the
latest
draft
and
I've
just
moved
off
the
page,
but
the
the
latest
draft
does
have
the
test.
Still
saying
that
you
know
you
should
ensure
that
anonymous
also
currently
set
to
false.
But
there
is
a
national
paragraph
that
says,
if
you're,
using
all
that
authorization,
it's
generally
faceted
reasonable
to
allow
anonymous
access
to
that
server
for
health
checks
and
purposes,
and
hence
this
recommendation
is
not
school.
D
CIS is working with the vendors to start producing vendor-specific versions, but before that happens there's going to be a branching point, and we want to try and make sure that everything that can be put into the common version gets upstreamed before that branching happens. Otherwise it's going to be a complete nightmare.
C
Yeah, thanks. So the second one is checking for permissions on the etcd data directory. For kubeadm clusters, we're using a static pod manifest for etcd, so it's a hostPath volume mount, commonly owned by root anyway, within the pod. There's no possibility to specify an etcd user or group, because we're not installing etcd on the host, so I'm not sure what to do about that one, really.
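To illustrate why the ownership check misfires: in a kubeadm cluster the etcd data directory is a hostPath volume mounted into the etcd static pod, so there is no `etcd` system user on the host for the files to be owned by. A simplified fragment of what the generated manifest looks like (paths and the image tag follow kubeadm's usual layout, but treat this as a sketch rather than the exact generated file):

```yaml
# Fragment of /etc/kubernetes/manifests/etcd.yaml as generated by kubeadm.
# The data directory lives on the host and is mounted into the pod;
# the etcd process runs inside the container, not as a host "etcd" user.
apiVersion: v1
kind: Pod
metadata:
  name: etcd
  namespace: kube-system
spec:
  containers:
  - name: etcd
    image: k8s.gcr.io/etcd:3.3.10        # image tag is illustrative
    volumeMounts:
    - name: etcd-data
      mountPath: /var/lib/etcd
  hostNetwork: true
  volumes:
  - name: etcd-data
    hostPath:
      path: /var/lib/etcd                # owned by root on the host
      type: DirectoryOrCreate
```

A benchmark item that requires the directory to be owned `etcd:etcd` therefore cannot be satisfied in this deployment model.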
D
I would say the best thing is raising a ticket in the CIS WorkBench, because it might be completely reasonable to change that item to say it has to be owned by etcd or root; I'm pretty sure some of the other files and directories are marked as requiring to be owned by root. I don't want to say definitely, but I think that would be a reasonable thing to suggest.
A
Because we don't actually install etcd as a service and run it as a service, the check seems a little weird: the kubelet itself is running as root, right? It kind of needs to, so the files could reasonably be owned by root. So I think updating the CIS benchmark to allow for multiple conditions makes sense, because this check is predicated on the notion that you have set up your deployment in a very specific way.
C
I've been kind of testing these, I think using kube-bench among others. The next one is checking use of the host network namespace. They say we shouldn't do that, but this is by design with kubeadm deployments, because we're using static pods. kubeadm specifically uses Kubernetes itself to deploy the control plane components, so they will necessarily use the host network namespace. So we need either some sort of way to exclude things like these, or not; I'm not sure.
A
If it were not a control plane node, I'd generally agree, and there are user stories or scenarios where a user might want this; it's probably a security risk for sure. But I don't know: how are we treating the CIS benchmark? Are we treating it as a canonical source of security concerns, or as advice?
C
I mean, the issue is that there are specifically security-focused organizations that will take what the pentesters say as gospel, and often these security teams don't have a massive amount of Kubernetes-specific knowledge. So you then end up having to argue the case: why is this flagged red?
D
The spec does give a little bit of leeway here. I'm looking at the latest draft, so I'm not sure if this is what's in the current release version, but it says "do not generally permit containers to be run with the hostNetwork flag set to true", and it's basically talking about using pod security policies to disallow containers from sharing the host network namespace.
D
If setting up the PSPs to allow that is too onerous (and there's obviously the issue that as soon as you turn on PSPs, you may make it difficult to actually get anything to run at all), it might be worth some discussion. But it does have that word "generally": "do not generally permit containers to be run with host networking". So there is a little bit of leeway there, I think.
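The control being discussed is typically implemented with a PodSecurityPolicy along these lines: a restrictive baseline policy for ordinary workloads, with the understanding that control plane static pods need a separate, more permissive policy. This is a sketch of the restrictive side only (the policy name is illustrative):

```yaml
# A restrictive PodSecurityPolicy for ordinary workloads: pods admitted
# under this policy cannot share the host's network, PID, or IPC
# namespaces. Control plane static pods need a second, privileged policy.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-no-hostnetwork    # illustrative name
spec:
  hostNetwork: false                 # the CIS control in question
  hostPID: false
  hostIPC: false
  privileged: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - configMap
  - secret
  - emptyDir
```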
B
I had a quick look at how to enable PSPs in kubeadm, and actually the UX around that is not that easy, because currently we have the NodeRestriction admission controller, and it is conflicting with the PodSecurityPolicy admission controller. So we have to first start the API server with just the NodeRestriction admission controller, then add the policies themselves, and only after that modify the manifest to enable the PodSecurityPolicy admission controller.
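The manifest change described here is usually expressed at init time through the kubeadm ClusterConfiguration, which writes the flag into the kube-apiserver static pod manifest. A hedged sketch (the plugin list is illustrative, and, as noted above, the PSP objects must exist before the plugin is enabled):

```yaml
# kubeadm ClusterConfiguration fragment: adds the PodSecurityPolicy
# admission plugin to the kube-apiserver static pod via extraArgs.
# Create the PSP objects *before* enabling this, or no pods will admit.
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  extraArgs:
    enable-admission-plugins: NodeRestriction,PodSecurityPolicy
```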
C
Ah, but you can do it, and we do do it; we did it. The interesting thing is that pod security policies do not get applied to the static pods themselves: static pods continue to run regardless. What happens is that the mirror pods don't appear in the API server until you have a pod security policy that lets them through. So yes, it's terrible UX, but it is possible right now, and that's how we do it.
B
I think this falls in the balance between the minimum viable cluster and the question of users who want to take an extra step to be more secure. We follow the logic of a minimum viable cluster in kubeadm, so we basically need to evaluate whether this minimum viable cluster is insecure with respect to host networking for the control plane containers.
A
I don't know; I can see your point, but I'm also thinking: this is a control plane node, it needs to be treated as special unless you've actually crafted it otherwise. If you were running these components via systemd units, you wouldn't have this problem, right? But that's a different deployment model than what we've chosen for kubeadm.
C
You basically have to apply the PSPs if you're running static pods; there's no way around it. Otherwise you'd never see them: the control plane carries on running, but you never see the mirror pods. So you must apply a pod security policy that applies to the kube-system namespace. That's going to happen regardless, so I don't think we need to enforce it with kubeadm, but...
C
I think the CIS benchmark check was aiming to verify that you are configuring the TLS cert for the kubelet client certificate. But since we have now shifted to TLS bootstrapping, you don't actually need to configure that anymore; it's all handled through the component config, while most of the tests are checking explicitly for the flag-based configuration.
A
Next one: kubelet server certificate rotation. The CIS benchmark requires enabling RotateKubeletServerCertificate. However, serving certificate rotation is not fully implemented in Kubernetes. Kubelets are able to submit certificate signing requests, but at present there is no automatic signing mechanism, because the CSR mechanism does not have enough information about the host identity to reliably issue certificates. Many operators work around this with a mechanism that automatically approves all CSR requests,
A
introducing more security vulnerabilities. Host identity should be considered the higher concern, and could be solved within projects that attest host identity. We do not believe such auto-approval controllers should exist until the host identity problem is solved. Right now, kubeadm will produce self-signed serving certificates during bootstrap. This is currently the best option without external cluster orchestration, such as Cluster API. A second option is to change kubeadm's methodology.
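The configuration under discussion can be sketched as follows. With serverTLSBootstrap enabled, the kubelet requests its serving certificate via a CSR, which (as noted above) nothing signs automatically; an operator has to approve it. This is an illustrative fragment, not a recommendation to auto-approve:

```yaml
# KubeletConfiguration fragment: ask the kubelet to request its serving
# certificate through the CSR API instead of self-signing it.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
serverTLSBootstrap: true
rotateCertificates: true   # client certificate rotation, which IS automated
```

Each node's serving CSR then sits pending until someone runs something like `kubectl certificate approve <csr-name>`, which is exactly the manual step a missing automatic signer would otherwise perform.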
B
Well, the only problem with the self-signed serving certificates for the kubelets is that they block regular use of the metrics server, so we possibly might have to write a troubleshooting guide entry about this and explain how users should handle it. One of the ways is for them to sign the serving certificate with the cluster CA, but to my knowledge you had an alternative proposal; maybe we can write something in the troubleshooting guide. Basically, yes.
C
Actually, if you look at the Prometheus Operator today, they do have, in their JSON assets, a setting specifically for the kubelet. So can we do self-signing right now? I think we just do it once on boot and never do rotations, because there isn't a signing mechanism; so that's already in place. I think most of the people who are using metrics-server have got workarounds in place. They might not be fully documented everywhere, so you're right.
C
Other things: the Docker benchmark says you haven't configured logging to an external service in the Docker daemon, whereas most Kubernetes users are not going to configure the Docker daemon itself to ship logs to an external service. So it makes sense in the context of users of Docker Swarm, but it doesn't make sense in the context of Kubernetes.
C
I think maybe we need a subset of the Docker checks. Some of the tests are reasonable, but not all of them. To me, we might need some sort of subset of these for each container runtime: like, if you're using this container runtime with Kubernetes, these are the tests which are appropriate in that situation.
D
Yeah, I was just taking a look, because I'm less familiar with the Docker benchmark than I am with the Kubernetes one. But the good news is it's the same people, CIS, on both sides. I think I will actually take an action here to have a chat with them about the Docker benchmark and see what they want to do; one option would be to mark those sections appropriately.
C
Yes, I think there's a couple of things here. I want to go back to the scanning methodology briefly. We have this same issue with managed control planes: how do you know the configuration of the Kubernetes cluster? Scraping its configuration files just feels like a brittle way to do that. Is there another way we could do this? It's not really a kubeadm question; it's probably a broader Kubernetes question.
A
And at what point, when we talk with CIS and submit the issues to the CIS group, should we basically help to spell out that the marketing around this needs to be clear: that a lot of this is advisory, and there's a bunch of constraints and conditions that apply? Because I think the idea of treating it as gospel can sometimes be antithetical to security, as you pointed out in a couple of the issues. Right, yeah, so I think we should.
C
And the final thing, which is in the Docker benchmark as well: all of our images run as root right now, and the Docker benchmark flags that throughout, like "you're running containers as root". So are there any plans to re-engineer these images? Most of them, I think, can be run as non-root.
A
We can fix that, I think. Especially if we have user namespaces in the future, that makes it a lot easier, because that's the whole idea: the container has the illusion that it's running as root, which gives it access to all the features it would want. I don't think there are any restrictions for most components; I'm trying to think off the top of my head.
B
Go ahead. Lia, who is taking notes, sent the PR to basically partially enable this, but we couldn't get it into this release of Kubernetes, so possibly for the next release we are going to evaluate what the best way is to customize the static pods and in which ways to configure them. I have a quick comment here, more like a question: isn't kube-proxy requiring root as well? It's required.
A
Yeah, but I want to make sure that when we're doing a benchmark, we're doing useful things. We should enumerate these cases; the hostNetwork one, CNI, and kube-proxy are a little awkward here. One thing we did a long time ago that's related to this sort of security checking is an RBAC constraint analysis for running the e2e tests, and we found out what it takes to get an actual canonical list of RBAC permissions.
B
So the benchmark had a lot of items that basically said that kubeadm is not completing them, and some of those are only possible to configure using extraArgs. I wanted to point out that this is something the benchmark covers, but it's a wider problem with component config again, and I don't think it's useful to even report these things until we have that solved. There are so many items that basically come down to extraArgs.
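For readers following along: the extraArgs mechanism referred to here lets kubeadm pass arbitrary flags into the control plane static pods, which is currently the only way to satisfy several flag-based CIS checks. A hedged sketch (the specific flags are illustrative examples of commonly checked items, not a vetted hardening profile):

```yaml
# kubeadm ClusterConfiguration fragment: passing flag-level settings
# that CIS-style checks look for, via extraArgs on each component.
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  extraArgs:
    profiling: "false"            # a commonly checked debug endpoint
    audit-log-path: /var/log/kubernetes/audit.log
controllerManager:
  extraArgs:
    profiling: "false"
scheduler:
  extraArgs:
    profiling: "false"
```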
A
I think there are two parts to this story. One part is working with the CIS team to help try and change the marketing, branding, awareness, and the tests themselves. The second part is a potential document for people who are evaluating kubeadm for their deployments, outlining basically the details you wrote up in your Discuss posts, maybe after we've gone through a loop back and forth with the CIS team.
E
Also, are we planning or willing to help out with tools like kube-bench itself, the things that actually execute and verify these tests? Are we going to help Aqua, or do our own, or what am I going to use to verify them and have a set of best practices and an easy tool to use?
A
In my opinion, just being blunt and honest: if we're talking about security for deployments, it should be built into the e2e test suite as a profile that gets run as part of the default test mechanisms. I don't think this should be external to core Kubernetes if it's a recommendation we're basically pushing people toward.
G
My only other comment is about user-facing documentation: making sure that we update a place in the onboarding process, where people first start learning to use kubeadm. They should be able to see notes about the CIS benchmark and about the non-default options they need in order to be compliant.
G
So this is a bit of a shift, if you haven't had the chance to get up to date on what's happening in the add-ons sub-project. Justin and I have been working to iron out some details with regard to how add-ons actually get installed, and the goal is to build some shared mechanisms and interfaces that can be vendored into projects like kops and kubeadm so that they can source add-on packages from similar locations (not necessarily the same location), and I wanted to run a few ideas by all of you.
G
So the first bit is we'd like to keep things as simple as possible. A lot of add-ons just use kubectl to apply things, and kubectl does have kustomize support vendored into it now. I'm curious if anyone has any objections to using kustomize for the packaging of add-ons, the benefit being that you get more sophisticated support with the kustomize overlays to do things like templating values, selecting the namespaces into which things are installed, and sanitizing what labels your tool applies; your tool can also apply annotations and things like that.
G
This would allow you to, for example, swap out images on the fly, and then it's patchable, so users can specify their own patches. We would be interested in building a component config portion around the kustomization YAML in order to support this kind of packaging effort and standardize the way that installers load these packages for installation. So that's the gist of it; does anybody have any comments or concerns with that?
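The capabilities mentioned above (image swapping, namespace selection, labels and annotations, user patches) map directly onto fields of a kustomization.yaml. A small sketch of what an add-on package along these lines could look like (all names, label keys, and versions here are hypothetical):

```yaml
# kustomization.yaml for a hypothetical CoreDNS add-on package.
# An installer could vendor kustomize, point it at this directory,
# and overlay site-specific patches on top.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kube-system            # select where the add-on lands
commonLabels:
  addon.kubeadm.io/name: coredns  # hypothetical label key
resources:
- deployment.yaml
- service.yaml
images:
- name: k8s.gcr.io/coredns
  newTag: "1.5.0"                 # swap the image tag on the fly
patchesStrategicMerge:
- replica-count-patch.yaml        # user-supplied patch file
```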
B
One of the problems is that we have to figure out the packaging situation again. Basically, we already recommend kubectl as an installation dependency for doing upgrades in some of the other tutorials, so I don't think it's going to be that much more of a burden if we say: hey, you also need kustomize if you want to apply add-ons from kubeadm. So I think it's okay in general, but...
G
I think that's the first requirement, yeah: to just have general apply support with all of the necessary merge semantics, because you're going to need that to do simple upgrades from the kustomize package. The fact that kustomize is already vendored into kubectl is just icing on the cake; it keeps things very simple. But if we ever move to server-side apply for packages, then we're going to need to also vendor in the kustomize stuff beforehand, or else you'll have to shell out to something.
G
Shelling out for those things is fine for a proof of concept, but there are a lot of API machinery bits in here that need to be proved out, and it's a significant effort. So doing it externally from kubeadm, maybe in the add-on operators repository first, and then having something that we can put into a proper home and vendor in, could be good, because these libraries will ultimately need to be shared. If we start out externally from the repo anyway, then you'll have that separation.
G
To the second point there, I really like that you brought that up, because it is a little bit of a problem right now. Since our add-ons are basically hard-coded into kubeadm, they are versioned commit by commit into the binary, which means if you use, say, kubeadm 1.11.1 versus 1.11.4, and there were code changes to the add-ons, they're guaranteed to be thoroughly tested with that exact version of kubeadm.
G
When we move to an external packaging solution that pulls something in over the network, you're a lot more similar to the Docker images, right? We have to bump our Docker image versions, and I think we calculate those currently based off of the Kubernetes version flag, or the version introspected from the kubeadm binary. So I would think that we would follow a similar pattern to determine what the proper upstream packages are for things like CoreDNS and kube-proxy.
G
And then just to refactor the wording a little bit: I believe the add-on manager will likely be implemented as libraries that get vendored into kubeadm and similar tools, or run as pods with elevated privileges for a short period of time while they reconcile the cluster's resources. So it's likely not a separate microservice that kubeadm is talking to, but rather something built directly into it with some small libraries.
B
There are a bunch of logistics here that are just going to take time to get clean. We should start with something very simple, like CoreDNS, for example, and work one thing at a time, because there's a bunch of constraints around existing UX workflows, for air-gapped installs and such, that we're going to have to keep clean. The problem is that we have legacy code that we need to make sure will work with all of this.
G
But yes, let's get some proof-of-concept code. I just really wanted to vet some of these ideas and have people poke at them and figure out where they fall apart. So thanks for all of the comments with regard to that. From what I've heard, it sounds like we in general support the approach, don't have too many major roadblocks, and are going to be able to work through these things.
B
I wanted to mention that kubectl is going out of tree and they are decoupling from the Kubernetes version, so it might create a bit of a complication in terms of the shelling-out story, and I also have a little bit of concern in terms of how they'll go on to manage kubectl for the rest of the components. I just wanted to mention that.
E
I think there are going to be problems with shelling out, because then we're operating at the binary level rather than at the Go vendoring level. But yes, for vendoring I think it just gets cleaner, I hope at least; I don't know exactly. We're running out of time here, but please link to that kubectl discussion; it's all moving now.
G
Yeah, that's all I had. Thanks for working through those issues with me; I'll get them into the design doc for the add-ons project.