From YouTube: Community Meeting, January 3, 2022
A
Hey everybody, happy New Year. This is the kcp community meeting, January 3rd, 2023. It's good to see everybody back, for those of you who are here. If you do have any topics, please feel free to hit the raise hand button in Meet and I will moderate and call on you. MJ, you had the first one before we started recording; if you wouldn't mind asking it again.
A
Very good question. We know that we need some amount of scripting automation, whatever makes the most sense and is the easiest to maintain, and what's in that Helm chart for getting certificates created and kubeconfigs and whatnot. I do think that we would definitely benefit if there were folks interested in going down the route of exploring what sort of tooling would be helpful, or, even before we talk about tooling, what you would put in a series of operational runbooks, so to speak, for the types of operations you would typically need to do when you're administering a production-quality, multi-shard kcp setup. That would include things like starting from scratch.
D
Yes, I was just wondering what the plan was for the kcp manifest directory, in the main kcp repo; there are also manifests there. I tried testing them a while back and didn't actually get a fully functional environment, but I just wondered if anyone is actually using those and if they're potentially a helpful starting point.

Obviously it's a fairly static setup, but it could be used as a way to show how to deploy a multi-shard environment when the time comes, and it already has the proxy and the other pieces, cert generation and stuff like that.
E
I think we've squeaked by without maintaining it, because we haven't put as much pressure on the multi-shard setup yet, but part of validating that multi-shard actually functions is going to be having something like that directory in the repo running in tests and actually working. Right, yeah.
B
Yeah, so I've basically been exploring this a bit on the side. For Helm charts, there is a PR which is closest to what was working before the refactoring; it's basically broken now after the refactoring and needs redoing. But the main problem, which I hit recently using the Helm one, is that when you try to replicate anything closer to the sharded version, like two shards plus the front proxy, you run into the issue of asset generation: shard assets, shard certificates, kubeconfigs and things like that.

Helm is basically the wrong tool for that job. It can deliver the content into the clusters, but when it comes to generating these things, we use cert-manager to do them, which implies it's running in the cluster, and then you need to start pulling things out and back in. This is why I raised the question about the installer, and potentially, maybe initially, just a simple asset generation entry point which we could reuse in the test server.
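As a rough sketch of the kind of standalone asset-generation entry point being floated here (this is not kcp's actual installer; the file names, CA subject, and key size are assumptions made for illustration), a small Go program using only the standard library could produce a CA from which shard serving certificates and kubeconfigs would then be issued:

// gen_ca.go: minimal sketch of a standalone asset-generation step.
// It creates a self-signed CA key pair on disk; shard serving certs and
// admin kubeconfigs would be issued from this CA in later steps.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

func main() {
	// Generate the CA private key.
	key, err := rsa.GenerateKey(rand.Reader, 4096)
	if err != nil {
		log.Fatal(err)
	}

	// Template for a self-signed CA certificate (subject name is assumed).
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "kcp-ca"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(1, 0, 0),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}

	// Write the certificate and key as PEM files that a Helm chart (or the
	// test server) could consume instead of generating them in-cluster.
	certOut, err := os.Create("ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	certOut.Close()

	keyOut, err := os.Create("ca.key")
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	keyOut.Close()
}

Per-shard serving certificates and admin kubeconfigs could be derived from that CA in the same way and handed to the Helm chart as values or pre-created secrets, rather than relying on cert-manager inside the cluster.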
A
Yeah, I think if you have some time, maybe you could write up just a short series of steps that you think would make sense, like what you were just saying. We could take a look at that and iterate.
A
I'm gonna get there. That's a good segue, good question. So we merged the giant chunk of work that refactors how we do workspaces. If you don't remember, or didn't see the discussions about that before, I'll try and briefly summarize. We used to have a ClusterWorkspace type, which was real and stored in etcd, and then we had a Workspace type which was fake, or virtual; when you requested a workspace, we looked up the ClusterWorkspace, made a copy of it, converted it into a Workspace and gave that back to you. From an end-user perspective, everyone was supposed to be using Workspaces; from a controller-author perspective, you were supposed to be using ClusterWorkspaces. That was confusing, it was hard to describe the differences between the two, and we were trying to simplify that to one degree.

The current code in main is focused primarily on Workspaces, which are stored in etcd as Workspaces. We temporarily still have ClusterWorkspaces, so we've reversed what was virtual: ClusterWorkspaces are now virtual. The only reason they still exist is that we had some end-to-end tests and other code relying on ClusterWorkspaces, and we decided to merge the changes in their current form and then do follow-up PRs to get rid of ClusterWorkspaces.

So in the end, when we're done, there will be Workspaces; they will be real, stored in etcd, and that's what you will interact with if you're trying to create them or list them or whatnot. We have added a new type called LogicalCluster. It is a CRD, and the presence of a LogicalCluster CR means that there is actually a workspace, or what we're calling a logical cluster, in terms of storage in etcd.

Every single LogicalCluster is named "cluster". The parent, or the place where this cluster CR is created, used to be visible: you'd see root, root:compute, root:my-org:my-team, you'd have the home workspaces, and all of that would be encoded in the URLs that you use to interact with workspaces, and it would also be encoded in the key segments in etcd.
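To make that naming change concrete, here is a purely illustrative Go sketch of the idea: hierarchical paths such as root:compute used to appear directly in request URLs and etcd key segments, while each logical cluster now gets an opaque name instead. The hash-to-base-36 mapping below is an assumption invented for the sketch, not kcp's actual naming scheme.

// Illustrative only: maps a hierarchical workspace path to an opaque
// logical-cluster name, roughly the shape of names like the
// "c-a-v-f-u-u-1" example mentioned above.
package main

import (
	"crypto/sha256"
	"fmt"
	"math/big"
)

// opaqueName derives a short base-36 identifier from a workspace path.
// This hashing scheme is made up for the sketch.
func opaqueName(path string) string {
	sum := sha256.Sum256([]byte(path))
	n := new(big.Int).SetBytes(sum[:8])
	return n.Text(36)
}

func main() {
	for _, path := range []string{"root", "root:compute", "root:my-org:my-team"} {
		// Previously the path itself was encoded in URLs and etcd keys;
		// now requests land on an opaque /clusters/<name> instead.
		fmt.Printf("%-22s -> /clusters/%s\n", path, opaqueName(path))
	}
}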
F
So there was a comment in the code explaining all of this. Is that comment updated?
A
I assume it is accurate, yes. So, I just pasted into chat: this is the LogicalCluster named "cluster". Inside of it you see it's c-a-v-f-u-u-1, something or other; this is actually for the root:compute workspace, so root:compute looks like this, and kcp itself takes care of creating these logical clusters for you. So when you say, hey kcp, please create a workspace called root...

There is a list of follow-ups. Let me...
F
I wasn't following what you said, so two questions first: this logical cluster, I think it is virtual, or is it? Is it stored? And you said it's a singleton; I didn't follow that part.
C
Where is the name? Is the encoded name exclusively for root:compute, and "cluster" is the name of the LogicalCluster?
A
If I were to create a workspace called, say, happy-new-year, we would end up with something that looks like this, just with some other value here. And so it's also a singleton within its own context.
F
Right, so it's not a singleton as far as everybody is concerned; for everybody except people looking at kcp's etcd, it's not a singleton. Okay, yeah.
F
Then there's one of those for every workspace, plus a few more system ones. Now, are you telling me that in every one of those logical clusters, in plain English, there is one LogicalCluster CR?
E
I think we'll probably take some time to somehow illustrate these concepts in consumable documentation, with drawings, that sort of thing. Yes.
A
Now that the workspace refactoring has landed, I'm hoping to spend a good portion of my time working on approachability and understanding of concepts. We've talked about splitting the repo up and making the readme clearer; I want to work on that this month, as well as making it as easy as possible to understand what you interact with and which concepts you need to know if you are a user, and likewise if you are a developer working on multi-workspace controllers.
A
So, in summary, the refactor has landed. It is a breaking change, or a series of breaking changes. If you have existing etcd data from kcp 0.10, from before the refactor, you will need to back it up and restore it if you want to keep it, or just wipe etcd.
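For that backup step, one option is etcd's own snapshot facility; the etcdctl snapshot save command does this from the shell, and below is a minimal Go sketch of the same thing with the official client. The endpoint and the lack of TLS are assumptions for the sketch; a real kcp deployment would use its own endpoints and certificates.

// snapshot.go: take an etcd snapshot before upgrading across the
// breaking change, so the data can be restored or inspected later.
package main

import (
	"context"
	"io"
	"log"
	"os"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Assumed endpoint, no TLS; adjust for a real deployment.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	// Stream the snapshot from etcd and write it to a local file.
	rc, err := cli.Snapshot(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer rc.Close()

	out, err := os.Create("kcp-pre-refactor.snapshot.db")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	if _, err := io.Copy(out, rc); err != nil {
		log.Fatal(err)
	}
	log.Println("snapshot written to kcp-pre-refactor.snapshot.db")
}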
The Helm chart has not been updated to work appropriately with some of these changes. Steve Hardy and I were working for the past hour and a half on what sort of changes we need, so look for those over the next few days, I would imagine.
A
All right, anybody else have any questions or comments or topics?
A
All right, well, welcome back everybody. I hope you had a good New Year. Hopefully we'll get back into the swing of things, and I know I have a bunch of PR reviews and things to deal with. So it's good to see everybody; I hope you have a good rest of your week, and I'll see you next week.