From YouTube: Kubernetes Federation WG sync 20180716
A: I think the necessary people are here, and I guess the invite was also limited to the folks who were present in the meeting. I guess we can start. I put a simple provisional list in the workgroup notes, so we can start from that as a reference. If there is something else that anyone has prepared, you can have a look at that also.
A: The job scheduling, the high-level federated scheduling, and the high-level federated HPA — I mean, these are spillovers which we actually wanted to be part of alpha but haven't been able to complete in time. So this is with respect to the parity that we want, and I have listed them as such, as we are already working on them. Jesse and I are making some progress on that.
A: Yeah, the next item, which I think might be quite useful: we have placement, and placement currently is a list of clusters on the placement object. There has been some brainstorming and discussion around label-selector-based placement in the workflow-and-architecture doc — probably the doc which Maru had prepared. I think it's a useful high-level feature, and I believe that, if needed, we can take plus-ones from stakeholders on this, and if we think it has enough plus-ones we can then go ahead and create an issue and then maybe have a deeper discussion on each of them. That would be a nice addition.
A: So currently, if a user wants to place a particular resource into, say, a set of clusters out of, say, four federated clusters — if a user wants to place the resource in two clusters — then per resource, the user has to create a placement resource as well. So it's a one-to-one mapping between the resource and the placement. With label-selector-based placement, the direct goal is to reduce the number of placement resources that have to be created.
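The one-to-one mapping described above is what label-selector-based placement is meant to collapse. As a minimal sketch (plain Python, not the actual Federation v2 API; `resolve_placement` and the cluster-label shapes are hypothetical):

```python
# Hypothetical sketch: resolve a label-selector-based placement against a
# set of registered clusters, instead of enumerating clusters per resource.

def selector_matches(selector, labels):
    """True if every key/value pair in the selector appears in the labels."""
    return all(labels.get(k) == v for k, v in selector.items())

def resolve_placement(selector, clusters):
    """Return the names of the clusters whose labels satisfy the selector."""
    return [name for name, labels in clusters.items()
            if selector_matches(selector, labels)]

clusters = {
    "us-east": {"region": "us", "tier": "prod"},
    "us-west": {"region": "us", "tier": "dev"},
    "eu-west": {"region": "eu", "tier": "prod"},
}

# One selector can now cover many resources; no per-resource placement object.
print(resolve_placement({"tier": "prod"}, clusters))  # ['us-east', 'eu-west']
```

The payoff is that adding a cluster with matching labels automatically extends placement, with no per-resource edits.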
A: This overlaps a little bit with — I think there have been some talks with respect to namespace-based placement. I believe there is no current implementation of that. The use case was that if a user creates a namespace and labels the namespace, a parallel resource — the namespace placement resource — is created, and the resources created within this namespace can all inherit that particular placement. So it serves a similar purpose.
C: I wonder if we can build one of these on top of the other. It seems like if we have a generalized ability to take multiple objects — multiple types, or instances of different types — and label them so that they end up in the same place, then we could use that as a building block and have some mechanism by which everything in the namespace is subject to that.
D: I think namespace-based placement is just super low-hanging fruit. It's implicit: I don't have to label anything and I don't have to worry about relationships between things — there's already a relationship supplied. Moving beyond that, towards something more like the Application CRD that the apps working group has been talking about — rather than inventing something, I think ideally we would ride on what they're doing, just because that concept would be broadcast pretty widely without us having to educate people.
C: I mean, I agree with you — there are different approaches to achieving the end goal. I've always thought of that as mostly an implementation detail, and I think it broadly boils down to: do you want to install and manage an entire Federation control plane, if what you want is unified read access or some sort of aggregated status? I think we could probably do both — it seems like we should explore.
C: ...what the use cases are and what the APIs might look like. I can imagine a potential implementation of federated read access being, for example, something in kubectl which just does a whole bunch of reads and smashes all the results together — and similarly with status, you can imagine something similar. Or you could implement it inside the Federation control plane, in which case you get the benefits of caching and a whole bunch of other things, and you're removing complexity from the client tools, etc.
B: For example, for status, I could easily see making a fairly simple API that is similar in certain ways to the essence of how we program the Federation v2 system to know about another resource: something that could generate a federated status resource for a thing, and then program a controller to watch it in multiple clusters and populate that status.
B: I think one of the things that's been in my head is that we have wanted folks to be able to use parts of the thing that we may eventually wind up calling Federation without using the rest of it, and so I think it would be great to prototype those things as separate APIs — we can even stick them into separate GitHub repos.
C: I think that's fine. I guess at the end of the day it would be useful to have an easy installation and an easy concept of, you know, when you get Federation, this is what you get and this is what it can do — as opposed to somebody having to piece all the parts together to do anything useful.
B: So one thing that Maru and I very briefly discussed on Friday is that we are probably going to write an operator to install Federation v2 eventually at Red Hat. We're still exploring this concept of what an operator is, but I will just recap for anybody that hasn't heard the term before. The concept is one that is becoming popular as a way of installing one software component, in a Kubernetes-native way, via another software component.
B: The canonical example that we tend to go to is one of the most well-known operators, the Prometheus operator. Its job is to watch a CRD called Prometheus, and when a new Prometheus resource is created, it creates an installation of a Prometheus in that resource's namespace. The Prometheus resource articulates configuration parameters about the Prometheus that the operator is supposed to install — for example, the version of the Prometheus software — and that leads to the conceptual payoff and value proposition.
B: The payoff is that the act of installing and keeping that software running is now the responsibility of the operator, and you, as the user, just declaratively say what you want it to do. So we are probably going to wind up writing one of those for Federation v2, and what we're trying to figure out right now is what it should cover — say that we played the movie forward and we implement these decomposed feature sets in various components.
B: It seems like an operator would be a point at which we could offer a turnkey experience for installing those things so that they work together. So you can say: give me a federation with the read-access API installed and with this flavor of federated status — and your responsibility ends at articulating what you want; an operator goes and sets those things up for you.
C: Well, maybe — I think we have some sense, so maybe you can just put, you know, P0 to P2 for Huawei's provisional priorities, and whoever can do the same for Red Hat, and then hopefully by the end of the meeting we'll have a sense of what's bubbled up to the top of the list and what's lower down.
C: There seems to be a lot of demand. At first I had some reservations — there's an email thread where somebody had written some objections — but yeah, I would imagine we can start off with a proposed design and hash it out. There's quite a bit of work that's been done there already by John Hui a while back; we can certainly revisit it. There were a few attempts at coming up with a final design.
C: I think that's reasonable — so, just revisiting it a little in light of v2, revisiting what we think the API should look like. There was a kind of intuitive appeal to "list resource X gives you a list of these things in Kubernetes, and it could do the same thing in Federation", but I'm sure we can have another bash at this and perhaps do better than we did 18 months ago, or whenever it was that that design happened.
C: There's a pretty extensive design document — there's a lot of work that's gone into that in the past. I don't know if Red Hat has any opinions on priority, or whether they think it's a good idea. In brief summary, it's the idea of being able to enforce RBAC at the Federation level, and the proposal thus far has been that, basically, Federation should suck in all the RBAC from all of the underlying clusters and enforce it synchronously at the API level, rather than just accepting requests at the Federation level and letting them fail downstream.
D: Seems pretty complicated, and I guess my pushback against the idea that this is somehow going to save us from having to discover things at the time of propagation is: we're already kind of on the hook to discover things at propagation — things like quota probably being the first one that comes to mind. I would agree that minimizing the amount of things that we have to discover at propagation is desirable; I would just characterize this as not low-hanging fruit, and fairly complicated.
B: I haven't thought much about this. I have been sort of thinking about auth in terms of: for folks that want to use Federation v2, is there a single point of control where they would write their RBAC policies — or potentially use another authorizer besides RBAC — at the level of the Federation APIs? But I'm just one person, and I don't have a lot of use cases yet from actual customers that color my opinion, so I wouldn't object to somebody exploring something like what you described, Quinton.
C: So I mean, if user X went to cluster Y and said "do something", and RBAC allowed them to do that, we would allow it at the Federation level; and if it did not allow the user to do that, we would actually disallow it at the Federation level already, before we even go to the cluster, so that we can give the user some kind of synchronous feedback rather than waiting until it fails in the cluster.
C: Conceptually, it could be done for any authorizer, provided that the Federation control plane could actually read the authorization rules and enforce them in a reasonable fashion. With RBAC, that seems relatively trivial, because you just get the RBAC controller and the RBAC API objects, you suck them into Federation, and you enforce them. With other external authorizers, I can imagine that potentially being more complicated, but I haven't thought about it in great detail.
B: One of the things that I've hit before in other areas is that not all authorizers have the ability to actually say which subjects some principal has access to, so that might be something that we hit with external authorizers. But when it comes to Kubernetes RBAC, to the best of my knowledge, currently what you see is what you get, and the RBAC rules are supposed to fully describe the authorization policy, as it pertains to RBAC.
D: I guess what I was thinking about is pulling the RBAC into Federation, and then it becomes a synchronization problem. But then also, if you're trying to propagate something, there are a lot of things you'll only know at time of propagation, and a lot of the dynamic scheduling would come into play — so that initially maybe you're not intending to distribute a resource to a cluster with restrictive RBAC policies, but then in the course of scheduling you could run into that. So I just think this is complicated.
C: Yeah, there is a fairly detailed design document. I'm not suggesting it's completely trivial, but I think it's not the most complicated thing either. Sure, in the absence of it you end up with a whole bunch of problems too, so the question is which of the two sets of problems is more difficult to solve.
C: If you struggle to give the user feedback as to what's going on — because, you know, RBAC failed in one of the underlying clusters and then you have to dynamically reschedule or do something like that — that also has its own set of complexities. But maybe now is not the right time to get into details there. It sounds like it's desirable, but we haven't figured out all the details yet, so I would propose we revisit the design doc and see who's interested in due course.
B: If you're open to it, Joshi, I will probably spend some time writing a proposal in the federation-dns repo for transformations that we could do — I think just basically to move the API resource from Federation into the federation-dns repo and tweak the API just a little bit — and then I think it would be independently usable.
A: Well, I am not very sure of the benefit of this. This would probably need to consume some information from the running Federation, right? Even if you keep the federated DNS API outside Federation, you'd need to consume stuff from Federation — or is there a possibility of consuming the same kind of information from somewhere else? It's still dependent on federated services, right?
B: I'm sort of interested in both aspects of what you said, Quinton, and I know we don't want to get into a design discussion here, so I'll just try to briefly describe what I want, and I think that will make it a little more clear. For example, say that I want what federated DNS does, in the sense that I want to have a single high-level DNS name for my service, which is deployed across multiple clusters.
C: Okay, we don't have to go into detail now, I guess, but one observation is that, for what you just said, you want to tell this thing where all the clusters are, what the resources and namespaces are, what the services look like, and all of that stuff. That currently is a federated service, and so you could come up with an alternative format for telling it essentially the same information — or you could just get the tool that you're using to create one of those resources that already exists to tell it.
A: We did give it a lower priority because it's quite a complicated thing to implement. What I meant here by rolling updates of federated workloads was not just one type: the concept is rolling updates, as in deployments, but with respect to clusters — apply it to any resource, so a rolling update of that resource across clusters. Any individual questions?
C: Yes, that makes sense. It feels like we've come across a couple of cases here where we think something is important but difficult, and we've deprioritized it because we think it's difficult. I wonder if we need separate axes for those two — so have a priority and a complexity, or something — and we can say "this is important but too difficult", as opposed to calling it not important because we think it's difficult. Does that make sense?
B: ...working well. And we've also heard some use cases from customers that I'm still trying to sift through, to see if what they have described is what they really want, or if there's something else that is actually what they want.
B: We've had customers tell us that they want some way to grant kubectl access easily, via a single API endpoint, to somebody. So, for example, if you request a new cluster and the IT department makes it for you — we have customers that are basically using a manual process right now to distribute coordinates and credentials and so forth — they want some way to just do this via an API and have people able to start using the clusters immediately.
C: I think this item is actually different from what you just described. This item is: if I have a tool that only knows how to talk to Kubernetes clusters through the Kubernetes API, can I expose such an API in Federation so that that tool can work against the Federation? And I will make a quick observation here: the original proposal for Federation v1, by myself, was that the API not be Kubernetes-compatible — that it essentially look more like Federation v2 — and Red Hat actually said no, no, no.
B: I haven't heard of that as a high priority recently, no. Okay — I do think it will be important to have some way to let users easily work with Federation without having to manually transform their resources; I think that is very important.
B: For example, I think it would be extremely useful to have a Helm integration that could take a normal chart, deploy it with Federation as the target, and transform it into the federated types that you would want to use. Having some way to do that — whether it's a manual process or an API endpoint — is critical to driving adoption of Federation. Okay, makes sense, cool.
B: I'm not sure it means it's a P1 for us. Like I said, I'm most immediately interested in looking at what areas we can explore while we give people a chance to use Federation and tell us whether they think it works well, or whether we need to change the patterns of interaction with it. So for me, the status API and a better way to do read access seem like more promising avenues of exploration — ones we can pursue without going deeper on a position that we might want to reassess.
C: I guess there are two versions of that: there's just "splat" — rubber-stamp exactly the same quota out to every single cluster, or some subset of clusters — and then there's more intelligent allocation of quota, in such a way that you can find quota in the places where it's available, and that kind of stuff. So it sounds like you're referring to the latter there.
C: Okay, well, I think it can be. At the simplest level, you can imagine creating PVs in multiple clusters and creating PVCs in multiple clusters, and making sure it all kind of makes sense, because there's an affinity component to that. And then there are also things like migrating PVs between clusters — for example, snapshotting in one cluster and restoring in another cluster — the more complex stuff. I know the storage working group at the moment is actively working on that, and they've done a few presentations recently.
D: I mean, I guess I think PVCs are pretty much just a configuration-only thing — the expectation is that the PV's configuration allows it to work. I'm less clear on how provisioning PVs would work; I guess you'd just have to know a configuration that would be able to be used in every cluster, and Federation would just be a handy way of doing it.
C: I haven't thought through the details yet either, but intuitively it seems like if I wanted to create the same volume X in all my clusters, I should be able to do that through the Federation; and if I then wanted to deploy a workload that used a claim against that volume in all clusters, it should work. But I haven't actually worked that through the API and made sure it makes sense.
C: So we are mostly, I think, at the end of our list now, but one thing I've wanted to mention before we run out of time — we've got about six minutes left — is that we're on the hook to give a quick update for the multicluster working group at a community meeting this week; I think it's Thursday. I was asked to do that and I'm happy to do it. It also seems like we need to do an announcement around the v2 alpha; I'm not sure to what extent that has been broadly communicated yet.
C: Right — before we do that specifically, I think it would be useful to actually have a plan for what we want to do, because it seems like that could just be a piece of a bigger plan. I don't want to delay it unnecessarily, but it seems like an itemized list of things we want to do to make sure that alpha gets out there is important.
A: So, for me, I would say the main intention of this release is to get Federation — as in Federation v2, in its current state — out to the users, so that they are able to set it up, use it, try it out, that kind of stuff. That's probably what the blog post or announcement or whatever should emphasize: links to the steps for how they can do it — some documentation has already been done, thanks, Violet — and the features listed around it.
C: Yes, I agree — it's just a case of writing down a list of things. So, presumably there's an email to kubernetes-dev, there's an email to kubernetes-users, there's a presentation at the community meeting, there are blog posts, whatever other things. It seems like there's more than one thing we need to do, and I'm not clear which, so I think we should put that list together and then start ticking off items on it — and the blog post would be one of them for sure.
C: That's a good idea. I think the community meeting is on Thursday, so if we could just wrap up what you guys want me to say at the community meeting on Thursday, that would be great — or somebody else can do it if they want. Has anyone heard from Christian recently? I haven't. I've also been traveling all over the place for the last month, but I haven't seen or heard from him.
A: And I think, just noting: the items which are needed to ensure that alpha is actually known to the wider audience, and the update on Thursday, are slightly different things. The bit on Thursday is the SIG Multicluster update — what SIG Multicluster has been doing, which includes other projects also — whereas what we are talking about is sending notice, or sending some information about the release, to the wider audience, which will include more things apart from that. Correct?