From YouTube: 20200225 sig cluster lifecycle
B
Sure, we brought it in recently. Support for ECDSA in kubeadm is experimental and is feature gated. Currently, during the init command, you can sign your private and public keys using the ECDSA algorithm instead of RSA. During upgrades and certificate rotation, the existing algorithm is going to be used, so you cannot switch between RSA and ECDSA on the fly. Currently, we are looking for feedback on the feature.
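As an illustration of the opt-in being described, a kubeadm configuration enabling it at init time might look like the following sketch. The feature-gate name `PublicKeysECDSA` is an assumption on my part, check the kubeadm version you are running:

```yaml
# Hypothetical sketch: opt in to ECDSA key generation at "kubeadm init" time.
# The feature-gate name below is an assumption; verify it for your kubeadm version.
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
featureGates:
  PublicKeysECDSA: true
```

This would be passed as `kubeadm init --config config.yaml`; per the discussion above, upgrades and certificate rotation keep whatever algorithm the existing certificates already use.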
A
This is interesting. As an open source project, I don't know if we are bound by any import or export restrictions or whatnot regarding this stuff, because I do know that elliptic curve does have export restrictions on it. It would be nice to, like, talk with the release team to see if they can have some notion here, or some statement, regarding this.
C
Justin, yeah, it's a great point to bring up. I did just grep in the source code, and we already have a reference in our JWT support to ECDSA. So I would imagine that, therefore, we are not doing anything we haven't already done, to put it that way. But I think it's a great idea to reach out and double check.
A
I think it's more about the artifact generation. I know you can have the source code available, but I think it's the actual output of the artifacts. That's usually the way it was in my previous life: where you had export restrictions, you would literally rip it out of the code to prevent it from being in the actual artifact or product that gets created. So, yeah, I think we should reach out.
B
Artifact generation being, like, generating the keys using [inaudible], but then you cannot distribute the keys?
B
Yesterday we had the SIG Release meeting, and we had a bit of a debate. So currently we are blocked on release tooling. My KEP, originally for moving kubeadm out, was proposing something which was building kubeadm from anago directly. This idea was mostly rejected, and then I proposed some alternatives to build kubeadm from k/k, so that anago can just pick the kubeadm artifact the way it is currently. So both proposals met objections from SIG Release.
A
I know I've kind of been a ghost lately. I will be trying, at least in the next cycle, to be more present, so I can help: do the political drum-beating work where possible, to try and get things into shape, maybe move some folks around to help out with the resourcing of the release-team blocking issues.
B
So the original proposal was that we continue releasing kubeadm as part of the node tarballs that we currently release every cycle, and also, I guess, as part of the debs and RPMs. And I proposed different ways to solve the problem, and for the first one I understood that doing this in anago is very complicated; we would have to inject a lot of logic [inaudible].
B
About the second approach I'm kind of slightly confused, because if we include the kubeadm build inside the make release command of k/k, we can produce an artifact placed in our output folder, and then the release tooling can just use it, the same way it is currently built from cmd/kubeadm. So I was very confused yesterday, but I was assured by folks like Tim Pepper and several others that there are complications that cannot be explained in five minutes. So I'm going to have another meeting with them, and hopefully I can understand the problem better.
A
So, one of the things I want to figure out — I should go to them with this conversation, but I would at least want to start it here to see if it's the correct conversation to have right now. We as a project create a set of artifacts which are kind of like a pseudo mini-distribution. We don't have a way of unifying this, and the existing tooling got us to a certain point, but we've not resourced the efforts to make it.
A
So it's like a federated distribution, right? Because in reality, what we should have is, like, a manifest. The manifest specifies specific versions that we want to pull into a distribution — we'll call that 1.16 — and that would include all the associated tooling required for us to be able to distribute a more meaningful distribution of, like, 1.16. For example, to make this more concrete, you could see a whole sort of set of pieces around the core as being part of something that we would want to release with the core.
A
And, you know, in the tarball artifact itself it could be, you know, denoted as contributing SIGs or something, whatever. But to make Kubernetes really useful to the end consumer — that's really what they want. They don't care about, you know, this federated mess. They care about wanting to get one thing, and, you know, that the one thing they have actually works for that Kubernetes version.
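A hedged sketch of the kind of manifest being described. The field names and the component list are invented for illustration, not an existing format:

```yaml
# Hypothetical distribution manifest: pins the component versions
# that make up one "1.16" release of the pseudo mini-distribution.
distribution: kubernetes
version: "1.16"
components:
  - name: kubernetes      # core release tarballs
    version: v1.16.0
  - name: kubeadm
    version: v1.16.0
  - name: coredns
    version: "1.6.2"
  - name: etcd
    version: "3.3.15"
```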
B
If you have a repository on GitHub, you can technically, every time a tag is created, just upload the artifacts. You can even upload deb and RPM files that people can just download and install locally. And, you know, this eliminates the package-repository factor, and it's all simple. I've been playing around with this and comparing this with the whole anago, Cloud Build, GCB build — you know, tokens, service tokens, and permissions. It's so complex, I don't see the point. But this, you know, this is a SIG Release topic. Yeah.
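The tag-triggered upload flow described above can be sketched as a GitHub Actions workflow. The build step and artifact paths are placeholders, not the project's actual tooling:

```yaml
# Hypothetical workflow: on every pushed tag, build and attach
# artifacts (including debs/RPMs) to a GitHub release.
name: release
on:
  push:
    tags: ["v*"]
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: make release            # placeholder build step
      - uses: softprops/action-gh-release@v1
        with:
          files: |
            _output/*.tar.gz
            _output/*.deb
            _output/*.rpm
```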
D
Yep, we're working on the first release of v1alpha3 upstream. We're currently targeting the second week of March. The milestone is currently set to March 6, but that's a Friday, so we're more probably looking at the week after, ideally the 10th, which is right before the first community meeting on Wednesday. The RCs are happening right now, every Wednesday; if there are enough changes, RC2 is happening tomorrow.
D
Probably tomorrow afternoon. We've had two RCs so far. RC0, I was just telling you not to use it, because it's broken; but RC1, folks, it has been really, really useful to do it, because folks have been finding bugs and we have been looking at things with more attention. And the last thing that I want to mention is that we added support for feature gates, with which we're trying to get some feedback, kind of like with experimental features — a lot of the time, folks wanted to do a POC.
D
We usually test our own assumptions, because as developers we usually do the same thing over and over. Now, like for alpha 3, not only did we make unit tests and integration tests much better, but we're also trying to push more releases more frequently. The other motivation to do RCs is to give a way for infrastructure and bootstrap providers living outside of the cluster-api repo to adopt each RC independently, because there might be breaking changes between those, and we don't want to push things all at once. So folks can do this incrementally.
C
So, in terms of cross-project things of interest: we had a small snafu with the CoreDNS image. It turns out that, like, one of the tags wasn't necessarily pushed — or wasn't pushed — to the k8s.gcr.io area where we were pulling from, so that was a little bit of a mess-up, and then we didn't catch it in kops, and so, like…
C
And then the other thing, which I think is maybe of particular interest to Lubomir, is we're trying to get our releases to be both automated — in Cloud Build, which I think most people have — and reproducible, so that, like, they will produce the same SHAs, the same hashes, every time. And that's so that, when we finally get the image promoter and the non-image promoter going in the Working Group K8s Infra to promote our artifacts, we have some way to…
C
…I guess, know whether a PR that proposes merging a bunch of — or updating a bunch of — artifacts is in fact a good one. Because you could, as an approver, re-run the build and verify that you've got the same SHAs on your machine, type of thing. Without that, I'm not quite sure how we would go about doing it; it comes down to a question of trust. But this is more of a "reproducible is a nice thing to do if we can get there" — and we still use Bazel, and so it isn't.
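The approver workflow being described — re-run the build and check that you get the same hashes — reduces to comparing checksums of two build outputs. A minimal sketch, with a deterministic stand-in for the real release build:

```shell
# Stand-in "build": in reality this would be the project's release build.
# A build is reproducible if two independent runs yield byte-identical artifacts.
build() {
  printf 'artifact-contents-v1\n' > "$1"   # deterministic output for the sketch
}

build artifact-a.tar
build artifact-b.tar

sha_a=$(sha256sum artifact-a.tar | cut -d' ' -f1)
sha_b=$(sha256sum artifact-b.tar | cut -d' ' -f1)

if [ "$sha_a" = "$sha_b" ]; then
  echo "REPRODUCIBLE"
else
  echo "MISMATCH"
fi
```

An approver would run the build locally and compare the SHA against the one proposed in the PR; any nondeterminism (timestamps, build paths) shows up as a mismatch.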
C
That is correct. That's what had not happened, because, effectively, what had been uploaded, like, in Kubernetes was a 1.6.6. There was a regression in there, so they pushed 1.6.7. Someone in the kops community noticed the regression and was like, well, we should update to 1.6.7, but it wasn't done sort of upstream, so it didn't go through that process. And I think, if we can get — this is where the Working Group K8s Infra would come in — is that, you know, we would have a more formal process to do that.
F
Yeah, sure. Let's see. So, we have a 1.8 release coming out later this week. The big highlights there are: our Docker driver is going to be more or less stable, and we should have experimental support for multi-node — spinning up multi-node clusters. It also has a bunch of, like, updates in terms of staying compatible with the most recent versions of Kubernetes, and, you know, CRI-O and all that stuff as well. That's the big highlight; there are also performance improvements.
F
It was — I mean, a lot of the code has been under development for a long time, and a lot of it was just refactoring our own code, and it was on the roadmap for a long time. So, yeah, we are gonna work more closely going forward, so we won't have a misunderstanding like what happened last time. Yeah.
A
I have a long-term objective, like over the arc of months. Tiers of developers and end users don't care: they just want to have an importable library, have it work, and they don't care whether it's minikube or whether it's kind. They just want one location for their developer user story. Yeah, and, you know, I think in the fullness of time I would love to just put them in a meat grinder and grind them out, and have one interface that works for both scenarios.
G
Yes — Justin and I have been working a little bit to just start on the unification of the kind of multiple different approaches. And as we're kind of just starting on that, Justin, like, liked the Docker-image machine-image approach. I was actually wondering what you guys thought about that, in this circle. I don't know if there are strong opinions around it, or remarks, that I think might determine if we go and try and convince people to use one approach, or to kind of support multiple different input and multiple different output options.
G
Okay, so it looks like we'll continue on an approach that supports both Packer and Docker, and works to kind of provide a unified interface to both of those options, so that people don't have to go and look at three different things. They can look at one thing and choose the kind of options that are relevant for their context.
A
I think the one lesson that we've learned as a SIG is that we cannot control a unifying API, but we can control the UX to be unified. So, like, a great example here is cloud provider integrations to create the cluster — Cluster API is an example here. We can't make all providers look the same, because they're intrinsically different, but what we do is we can unify the user experience, so that we reduce the cognitive overhead to consume this. So that way, the user experience is unified.
B
More subtly — because, before, it was that if somebody goes to the image-builder GitHub repository right now, it's kind of difficult to understand what this project currently supports and what is work in progress. I think that maybe you should place a couple of paragraphs there explaining what is currently being done, what formats are planned to be supported, things like that.
G
A CLI, and actually, I think, a Docker image as well. I know from past experience that Ansible dependencies and Packer dependencies can be quite fragile, so actually combining that with a Docker image might also solve the versioning problem. Well, you can produce a version of the Packer and Ansible scripts, and make that an output that gets tested and released.
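The "pin the fragile tooling inside one image" idea might look roughly like the following Dockerfile sketch. The base image and version numbers are illustrative assumptions, not the project's actual pins:

```dockerfile
# Hypothetical sketch: pin Packer and Ansible versions inside one image,
# so image-builder runs don't depend on whatever is installed locally.
FROM python:3.8-slim

# Versions are illustrative, not the project's actual pins.
ARG PACKER_VERSION=1.5.4
ARG ANSIBLE_VERSION=2.9.6

RUN apt-get update && apt-get install -y --no-install-recommends curl unzip \
 && curl -fsSLo /tmp/packer.zip \
      https://releases.hashicorp.com/packer/${PACKER_VERSION}/packer_${PACKER_VERSION}_linux_amd64.zip \
 && unzip /tmp/packer.zip -d /usr/local/bin && rm /tmp/packer.zip \
 && pip install --no-cache-dir ansible==${ANSIBLE_VERSION}

# The image-builder Packer templates and Ansible playbooks would be
# copied in here and invoked through the pinned binaries above.
WORKDIR /workspace
ENTRYPOINT ["packer"]
```

Releasing that image alongside the scripts would give a tested, versioned combination of both toolchains, which is the output the speaker is describing.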