From YouTube: Argo CD and Rollouts Community Meeting Mar 2023
A
All right, let's try this again. Good morning, everyone, and welcome to the March 2023 Argo CD and Rollouts community meeting. I'm your host for today, Jesse, a maintainer of the Argo project. If you haven't already, please add yourself to the attendee list in the meeting notes. Just a reminder that the Argo project adheres to the CNCF code of conduct, so please be courteous and respectful, and a reminder that this meeting is being recorded and will be uploaded to YouTube.

And finally, if you do have any agenda items or discussion topics that you would like to discuss during this meeting, please feel free to comment on the meeting notes, and we may or may not be able to get to them after the current agenda.

Okay, with that, I think I can hand it off — is it Nick? Do you want it? Yeah, okay.
B
Great, thanks. So I think we'll start off with a quick round of introductions.
C
Hi, can you hear me? Yep. Hi, I'm Carlos Santana. I'm a solutions specialist at AWS working with EKS customers. I have a background in open source and I've been working with Argo CD; before this I worked with OpenShift at Red Hat, and I'm also a co-founder of Knative, which is associated with the CNCF. Nice to meet you, everyone.
D
I can go next. Hi everyone, Nima Kaviani. I'm a principal architect at AWS. I work with our customers on open source solutions, particularly on Kubernetes, so Argo is definitely one of the interesting projects for a lot of our customers. I've been connected to the Argo community in the past — I've collaborated with Ed Lee and Henrik Blixt on a number of initiatives before — so it's good to be here and collaborate with you on this new initiative that we're trying to put forward. Yeah, good to see you.
B
And I don't know if Andrew Lee was able to make it, but he's the other co-author on this proposal.

So with those introductions out of the way, I think I'll just jump into the proposal and start working through it. Feel free to jump in if anything's unclear, and we're hoping to field some questions at the end after getting through the proposal; we've also got a couple of calls to action. So, starting off: why are we doing a proposal for benchmarking Argo CD scalability?

Users want to understand what configuration tweaks and deployment options they have, and how far they can push the components of Argo CD and the resources it's managing — the number of supported applications, Git repositories, and managed clusters. The Argo CD documentation has some guidance on how to scale; I'm sure most of us are familiar with the high-availability scaling-up portion of the documentation, and those of you who are know that the process isn't particularly clear, as articulated in a number of issues.

It's often a point of confusion for Argo CD users. So the motivation here is that, by running large-scale benchmarking in an automated fashion, we aim to provide the Argo CD community with the following. We want to give confidence to organizations running at significant scale — that still needs to be quantified — that Argo CD can support their use case, and back it with empirical evidence, so they know it's not just a hand-wavy "yeah, it can scale"; here are the detailed test scenarios that proved it.

We want to create clear guidelines for scaling Argo CD, based on the key scalability factors that we're going to identify as part of this proposal, and provide recommendations for which topology is best suited for users based on their needs — instead of users having to go out and discover the right blog post that happens to explain their exact use case and a topology that fits it.

So for the goals of this proposal, we want to start off with a standard set of repeatable benchmarking procedures that objectively measure the limitations of Argo CD.

As part of this, there's likely going to be a new repository — we're thinking argocd-benchmarking, under the argoproj-labs organization — that anyone can easily come into and use the benchmarking procedures within that repo, either to replicate the existing test scenarios or to tweak them for their own use case or a specific scenario they're considering. Having a dedicated repo also gives it a separate development lifecycle.

Unlike the current gen-resources hack within the project, which kind of does scalability testing, we'll move that out into its own repo so that anybody interested can go and contribute to it, and we reduce the burden on the core maintainers by keeping it outside of the main project. The idea is that we have detailed test scenarios with these key scalability factors defined, so that you can take them and easily tweak them to test alternative scenarios — based on, maybe, a specific hypothesis that you want to test.

Or your use case specifically. And then, once we have that tooling, we want to determine the baseline: take the default configuration of Argo CD, the manifests provided by the project, and understand how far you can push that without having to tweak the individual components — again backing that with empirical data based on the testing procedures — and then expand on that baseline.
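As a rough illustration of what a baseline scenario could look like, here is a minimal sketch, in the spirit of the existing gen-resources hack, that generates a batch of Application manifests. The repository URL, path, and application count are placeholders for illustration, not values from the proposal.

```python
# Hypothetical sketch: generate N Argo CD Applications for a baseline load test.
import yaml

def make_app(index: int, repo_url: str, path: str) -> dict:
    """Build one Application pointing at a shared test repository."""
    return {
        "apiVersion": "argoproj.io/v1alpha1",
        "kind": "Application",
        "metadata": {"name": f"bench-app-{index:05d}", "namespace": "argocd"},
        "spec": {
            "project": "default",
            "source": {"repoURL": repo_url, "path": path, "targetRevision": "HEAD"},
            "destination": {
                "server": "https://kubernetes.default.svc",
                "namespace": f"bench-{index:05d}",
            },
            "syncPolicy": {"automated": {}, "syncOptions": ["CreateNamespace=true"]},
        },
    }

if __name__ == "__main__":
    apps = [
        make_app(i, "https://github.com/example/bench-manifests.git", "guestbook")
        for i in range(1000)  # placeholder app count
    ]
    print(yaml.safe_dump_all(apps))  # pipe into `kubectl apply -f -`
```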
B
Specifically, there are three things that we don't intend to cover with this proposal. By understanding these metrics and the thresholds for them, there's some room to do autoscaling based on that, but we do not intend to include that here; that would be a separate proposal after we've done this work, to potentially implement some autoscaling functionality. We don't want to add that complexity to this existing proposal.

We think it could potentially be part of the CI pipeline, to test whether the performance of Argo CD has changed, but we're not going to get into that quite yet. And finally, we have no intention of analyzing the cost implications of running different topologies; we're purely focused on the technology side of scalability and performance, not on how to optimize for resource costs or anything like that. So, ultimately, this proposal comes down to these steps.

As you can see, the key scalability factors are going to be an important part of this, because every one of them has a different impact on performance, and that's possibly where we're going to need the most help: determining which ones we should care about and include in the testing scenarios.

So, Argo CD one-to-one with clusters versus one Argo CD managing multiple clusters, and the impact of that. Ultimately, through all of that work, we can come out of it with thresholds for the metrics that identify when performance is going to be impacted by one of these key scalability factors, and then ideally contribute back to the project: Grafana dashboards with built-in thresholds, and potentially alerting for Prometheus or Alertmanager, that sort of thing.
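As an illustration of the kind of threshold check those dashboards or a benchmark harness could encode, here is a minimal sketch. The Prometheus endpoint, the specific metric query, and the 30-second threshold are assumptions for illustration, not values from the proposal.

```python
# Hypothetical sketch: compare an Argo CD latency metric against a threshold.
import requests

PROM_URL = "http://prometheus.example:9090"  # assumed Prometheus endpoint

def query(promql: str) -> float:
    """Run an instant query and return the first sample value (0.0 if empty)."""
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": promql}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0

if __name__ == "__main__":
    # e.g. p95 application reconciliation latency over the last five minutes
    p95 = query(
        "histogram_quantile(0.95, sum(rate(argocd_app_reconcile_bucket[5m])) by (le))"
    )
    threshold_seconds = 30.0  # placeholder; real thresholds would come from the benchmarks
    status = "OK" if p95 <= threshold_seconds else "DEGRADED"
    print(f"p95 reconcile latency: {p95:.1f}s ({status})")
```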
B
So the key use cases that we're going to cover are essentially the two different topologies: we're going to measure Argo CD in-cluster versus Argo CD managing external clusters. The core difference is in how you run the benchmarks: we're potentially going to try simulating clusters and nodes using either vcluster or kwok, or using a real cloud provider with a real cluster fleet in the end.
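A minimal sketch of what provisioning a simulated fleet might look like with vcluster, registering each virtual cluster with Argo CD. The cluster count, names, and kube-context naming scheme are assumptions; the vcluster and argocd CLIs are expected to be installed and already logged in.

```python
# Hypothetical sketch: spin up virtual clusters and register them with Argo CD.
import subprocess

def run(*cmd: str) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def create_fleet(count: int) -> None:
    for i in range(count):
        name = f"bench-vc-{i:03d}"
        # Create a virtual cluster in its own namespace on the host cluster.
        run("vcluster", "create", name, "--namespace", name, "--connect=false")
        # Write a kubeconfig context for it without switching the current context.
        run("vcluster", "connect", name, "--namespace", name, "--update-current=false")
        # Register that context with Argo CD (the context name is version-dependent).
        run("argocd", "cluster", "add", f"vcluster_{name}_{name}", "--yes")

if __name__ == "__main__":
    create_fleet(5)  # tiny example; a real benchmark run would scale this up
```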
B
But the idea is that you can take the same benchmark tooling and just swap out the back end to test your specific scenario. And I think that is the end of the proposal as a whole. Carlos and Nima, is there anything I didn't cover that you think we should include?

Thank you. So, coming out of this, one of the goals that isn't explicitly stated here is that I'm interested in forming a SIG focused on scalability, so that we can meet on a regular cadence and continue working through this proposal in that special interest group. And we're interested in any other organizations or users that either have experience running at scale and have hit limitations — who can help provide some more direction on this — or would like to contribute to the actual tooling that's going to support these test cases, the benchmarking tooling.
C
Yeah, one thing to add on this from our side: we want to develop the benchmark to make it super easy to run anywhere. We mentioned vcluster, but it's also intended to support real cloud resources — running EKS clusters specifically — because we have a lot of customers asking us how they can run Argo CD and how it scales.

So we want to get those answers, and we're happy to collaborate, and also to contribute resources so that we can spin up large clusters or multiple clusters and then report back with the instrumentation. But we'd be happy to see other cloud providers and other private companies contribute as well, so that we can run these benchmarks everywhere that Argo CD can run — even on the edge, right? Yeah.
A
Yeah. So, first of all, thank you for tackling this issue. We always get asked, "how well does Argo CD scale?", and it's such a loaded question — there are so many dimensions. I think you've got a good starting list. The dimensions that you outline, at a high level — applications, clusters, Git repositories — and then within each of those categories there's the number of apps, the size of each application, the churn on those individual applications; and for clusters it's the same thing: the number of clusters, the size of each cluster, and whether there's churn on those. Mono-repo versus multi-repo is a different thing again.

But the one thing I think you may also want to consider is the tooling, because it makes a big difference. For example, deploying raw manifests versus invoking a templating tool — and within that, Kustomize has a different weight than, say, Helm template — or maybe you just want to eliminate those from the measurement and use the fastest one, which is just raw YAML.

It's another consideration, another dimension. I can say that it affects performance, because if your tooling is expensive it may or may not be parallelizable. The easier thing might just be to assume raw YAML for everything, just for an apples-to-apples comparison.
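A small sketch of how that tooling dimension could be measured in isolation, by timing each rendering tool against placeholder paths; it assumes helm and kustomize are installed and that the chart and overlay directories exist.

```python
# Hypothetical sketch: compare manifest-rendering cost across tools.
import subprocess
import time

def time_render(label: str, cmd: list[str]) -> None:
    """Run a rendering command and report how long it took."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    print(f"{label:>20}: {time.perf_counter() - start:.2f}s")

if __name__ == "__main__":
    time_render("raw YAML (no-op)", ["cat", "manifests/all.yaml"])
    time_render("kustomize build", ["kustomize", "build", "overlays/prod"])
    time_render("helm template", ["helm", "template", "my-release", "charts/app"])
```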
C
Yes, that's great input. I think that's something we can add — that's the reason it's a proposal, right, to get input. So the next level is measuring those; you said that Helm was faster, or Kustomize was faster, than the other one — I didn't know that.

So that's the idea: to highlight this to the community, so anyone who has the data, who knows, can contribute it, and we can make sure we develop a tool to run those permutations.
A
Exactly, yeah. So I guess the ideal outcome, for me at least, is that the next time someone asks "can Argo CD handle this?", we could accept inputs to the tool — how many applications are you planning, how many clusters, all these kinds of inputs, are you using Kustomize or something else — and then it can maybe output some recommendations, or at least show what was tested.
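A toy sketch of that "inputs in, recommendation out" idea: match a user's expected scale against recorded benchmark scenarios. The scenario data and field names here are invented purely for illustration.

```python
# Hypothetical sketch: look up the smallest tested scenario that covers a request.
from dataclasses import dataclass

@dataclass
class Scenario:
    apps: int
    clusters: int
    tool: str        # "raw", "kustomize", or "helm"
    topology: str    # e.g. "in-cluster" or "hub-spoke"
    notes: str

# In the real tooling these records would come from published benchmark results.
SCENARIOS = [
    Scenario(1000, 1, "raw", "in-cluster", "default manifests, no tuning"),
    Scenario(5000, 50, "kustomize", "hub-spoke", "extra repo-server replicas"),
    Scenario(10000, 100, "helm", "hub-spoke", "sharded application controller"),
]

def recommend(apps: int, clusters: int, tool: str) -> Scenario | None:
    """Return the smallest tested scenario that covers the requested scale."""
    candidates = [
        s for s in SCENARIOS
        if s.apps >= apps and s.clusters >= clusters and s.tool == tool
    ]
    return min(candidates, key=lambda s: (s.apps, s.clusters), default=None)

if __name__ == "__main__":
    match = recommend(apps=3000, clusters=20, tool="kustomize")
    print(match.notes if match else "no tested scenario covers this scale yet")
```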
C
Yep, that's fair, and I think the end goal we have is basically to put it out there, because it's a starting point. So there are things we said no to and put in the non-goals section.

For example, like you said, ideally people wouldn't have to tweak or tune these things — it would tune itself with autoscaling, in all dimensions — but that would mean putting smarts, putting enhancements, into Argo CD to actually autoscale and auto-tune itself based on the workload. With that as a non-goal for now, let's just find out how we can stress-test it today, and then have some tooling that can do the automation.

We also saw some recent blog posts — Dan from Codefresh did a blog post recently on scaling Argo, and I pinged him on this proposal too — and then Jun Duan did a cool blog post recently on Medium about 10,000 Argo CD apps; he's from IBM Research. So we're just inviting folks that are in this space, doing the work and putting out blog posts: let's create a SIG and work together. I don't know if they want to comment, or if somebody else wants to comment.
E
There were other bottlenecks that came before it, so we also have the desire to be able to scale Argo as far as we can — yeah, exactly, you got it. And then beyond that, once you get to the point where Kubernetes is involved, that's where kcp-edge takes over, but I'm not going to steal any thunder here.
E
What I wanted to ask a question about — Carlos and I have worked together in the past, which is great, so I'm glad to see him here in this forum, and thanks for thinking of us. The autoscaling — if you could scroll up just a bit here — what exactly do you intend? I know you just touched on it briefly, but the autoscaling enhancements: is that provided the platform supports it, or are you thinking of other things outside the platform to do it?
B
Yeah, I think that's why we included it as a non-goal: we haven't gone into much depth on how we'd actually implement any sort of autoscaling, because that deserves its own proposal — do you assume the platform is providing those components, or is it part of Argo CD, and all of that? So it's a great question.
C
And, for example, something I recently learned about Argo CD is increasing the replicas for the application controller and sharding based on the cluster name — and then there are issues out there saying "it worked for me", "it doesn't work for me". How well does the sharding work in that area? Can we improve it? Can we make it extensible, maybe? But I think, first, we didn't want to make this proposal so scary that no one joins the SIG.
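For readers unfamiliar with the sharding being discussed, a rough sketch of the underlying idea follows: each managed cluster maps onto one of N application-controller shards, and each controller replica only reconciles the clusters on its shard. The hashing shown here is illustrative; the actual assignment logic in Argo CD (and the option to pin a shard in the cluster secret) varies by version.

```python
# Hypothetical sketch: map clusters onto controller shards with a stable hash.
import hashlib

def shard_for_cluster(cluster_id: str, replicas: int) -> int:
    """Assign a cluster identifier to one of `replicas` controller shards."""
    digest = hashlib.sha256(cluster_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % replicas

if __name__ == "__main__":
    clusters = [f"https://cluster-{i}.example.com" for i in range(10)]
    replicas = 3  # e.g. ARGOCD_CONTROLLER_REPLICAS=3 on the controller StatefulSet
    for c in clusters:
        print(f"{c} -> shard {shard_for_cluster(c, replicas)}")
```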
F
I had a question as well. First, thanks for this thorough introduction to the topic. I think it's very important to have this documented, so that the community knows how to scale and how far Argo CD actually scales.

I was wondering, if you're intending to create a SIG, whether it should also include reliability in the same scope — reliability and resilience. Someone mentioned the edge previously, and oftentimes you have a flaky network, especially in a hub-and-spoke model. Should that be scoped in there, or is it too early for that? I don't know, it's just a thought.
B
Yeah, that's a good question — I hadn't considered that for the scalability SIG. If the SIG is going to exist long-term, I think it should definitely include that, but in the short term I think we're really trying to scope ourselves to the benchmarking tooling for right now, so yeah.
D
Rather than adding new parameters, the focus is to get something out the door that can help us objectively communicate the scalability of Argo, and I think that's one of the biggest blockers we have when we go to customers. To the earlier point, it's always been a hand-wavy response that we've had up to this point on Argo scalability; at least now we can point to some numbers, something concrete. Yeah.
F
I know — it's a question that's asked very often, so yeah, I know.
G
A couple of points I had — and I think Diana actually had the same comment on the PR — is that, given the setup is not going to be very close to what a real-world setup would be, the usefulness of the benchmark results might be impacted. But I know we want to start small, so it makes sense.

What we should probably do is have a section in the proposal where we mark some of these things that we know are potential areas of improvement, to eventually work on as part of the working group, just so that some of this feedback doesn't get lost in the audio conversation.

The other thing — and it's up to the project leads — is that it makes sense for us to be able to run these benchmark tests prior to a release, to understand where we stand after a bunch of changes have gone in. Other than that, this looks great. I added a topology suggestion in the PR, but I don't need to speak to it at length. Thank you very much — this is really exciting.
D
Yeah, I mean, it probably is a good idea to run these tests. It's not part of the proposal, but if you want to run the benchmarks as part of the release process, it probably needs to be a scaled-down version, right? Because the intention of the particular benchmarks that we put out is to push the limits, and I don't necessarily think you want to push the limits of Argo every single time you cut a release.
G
Right — there's a whole cost associated with it, sure, and we'd need somebody to donate infrastructure for us to be able to, say, spin up hundreds of nodes and run a benchmark test. So it's definitely expensive, and the cost should be worth it, which is why I think it's probably valid to put it in as a line item, but definitely not something I expect us to be able to expand on while we start this effort. Yeah.
D
Yeah, and for those same cost and scale reasons, this was one of the areas where we thought AWS could help, because we're hoping to be able to fund the scalability requirements and see how far we can push it. I don't know the exact numbers yet in terms of how many clusters, but we'll try to get as much funding as possible to push the limit.
A
All right, thanks, everyone. I think we can move on to the second topic: we have someone from the kcp-edge community presenting on it. That's it — oh, right, go ahead, take it away.
E
Yeah, let me see if I can get my browser looking right here. So, kcp-edge is a project that we started a little while ago, back in November; we were hatched as a sub-project of kcp — which purposely does not expand to "Kubernetes control plane", though you would guess that from the acronym. The idea here is that Kubernetes itself has some limitations.

Well, quite a few, but it's good for the majority. We find that at the edge — and specifically what we define as an edge location, which is anything that can speak a kube-style API and transmit spec and status — that could be something as small as MicroShift, or maybe k0s; it could be something as big as a multi-node cluster; or it could be a single-node OpenShift.

So anything and everything, every variation and flavor of something that speaks Kubernetes, is what we consider an edge location. And you might say, well, that doesn't cover devices, what we typically call "the edge". Well, they're getting smaller and smaller, to the point where they can be used as gateways to provide access to those locations or to those individual devices, and they can even fit on those devices now — so we're getting closer to that inflection point.

What kcp-edge is concerned with is what kcp brought to the stage with transparent multi-cluster: a way to synchronize workloads at edge locations using a declared placement. What we found is the limitation there is the one-to-any notion, where you could put one workload out there and it would go to any node in a cluster. Well, that's not enough for us — we want to go one-to-many.

So we have this idea of distributing workloads one-to-many, out to individual edge locations, and once we've got that accomplished, of course, then we want to be resilient enough to survive disconnected operation. That's another area we're actively considering and working in: what happens when you lose connection to Kubernetes — when you're in a hub-and-spoke model, so to speak, and the spoke gets disconnected?

The spoke may be completely offline, but it should be able to continue operation, and that's where we think there's work to be done. We've got a whole host of information out on kcp-edge.io; we've put the community together recently, and our GitHub and all the Slack channel information is located there. More recently, we've started doing a little more outreach into the community.
E
Here on the call with me is the person who wrote that article about Argo CD scale; he's actually working in the area of the edge scheduler, designing the next wave of what it looks like to have one-to-many distribution of workloads at the edge, and we're also working on things like status summarization.

We can then scale that back with things like predicates and filters, and other things that can be introduced by other ecosystem players in the CNCF that are concerned with placement — things like how to avoid places where CVEs are in play, or how to avoid places that are more expensive in terms of carbon footprint, et cetera. So this has bigger implications.

And then the last thing I'll say here before handing the presentation back is that we're also looking at what it takes to scale the Kubernetes API machinery, inasmuch as you're probably familiar with Kine as an interface to other back ends — Postgres or other SQL/relational databases — that can be the backing object store for Kubernetes.

It's a bigger project with a much longer arc, but I thought it would serve us well to talk about it here, because when you get to a certain point with Kubernetes, etcd of course becomes its Achilles heel, and we believe that exploring and experimenting in this space is going to lead to big leaps and bounds in terms of how far out the scale can go. So I think it'll support where you're going here with the Argo project.

So that's it, that's all I've got. This is the open invitation — it's on kcp-edge.io; we've been on LinkedIn here and there, and we're doing a lot of blogging on Medium; there's a whole host of information out there. I'll give you that real quick, and then I'll open it up for anybody who has any questions. There's the kcp-edge Medium and our reading list, and a whole bunch of other articles. We've even done some experimentation with GPT — it's all the rage now with ChatGPT — where we can actually build customizations, much like what you'd be used to with Kustomize, using GPT to do the replacements in each of the declarative APIs.
A
So maybe you can tell me how you see kcp-edge tying into Argo CD. As I understand it, kcp is almost like a proxy control plane, a meta control plane, and Argo CD just cares about control planes — so I guess maybe there's a fit there? I don't know; maybe you can explain.
E
That's a really good point, and I'm glad you're scratching your head on it, because it really shouldn't be a consideration for you. You know that you work with Kubernetes, so therefore you'll work with kcp, and therefore you'll work with kcp-edge. On the northbound side there's really no change at all. Obviously you're able to take whatever CRs we hand over — you'll be able to work with, say, ACM or OCM and ManifestWork, and the same will be true with us, with edge scheduling, edge placement, placement slices, and so forth.
E
So as you start to move in the direction of "hey, I want to be able to scale to handle millions of edge locations", you know that for each application set you're going to be sending down there's more than just one object per location, and if you've got multiple applications, that magnitude multiplies even more.
A
Okay, so I can actually see how that might benefit the control plane that Argo CD itself is running in as well, because all we need Kubernetes for is really to store Application CRs and AppProjects — it's just metadata to us, right? And a lot of times you don't need a full-blown cluster, a workload cluster, for that purpose. So that's actually of interest to me as well: how can we optimize the use of just a generic control plane for the purposes of metadata storage, and even get better persistence?
E
Yeah, you're right on top of it — you hit it right on the head. I don't need a whole cluster for this; kcp in and of itself is just a binary.

It's lightweight, and then once you get the object store attached to it — etcd, of course — you're able to scale it to however many objects you need, up until the point where the Raft protocol falls apart, and now you've got this consistency issue that starts to take place. So that's what we're going at head-on: we're saying you should be able to use any relational database on the back end. Look at CockroachDB, for example.

It has its own time series; it offloads a lot of the areas of concern where etcd had to make up for not having native database interfaces. You can find those in native databases, so why not use them? We really have to untangle how etcd is intertwined with the API machinery in order to do that, which is why I say it's a bit of a longer arc.

But we're looking for people now to get in on the ground floor and help us figure out how to fix that. We're investing in it, we're going to take the time to do it, and we've got some shorter-term projects in kcp-edge that will get us into the middle of the conversations we need to be in, but that's the longer-term one that I can see people here benefiting from.
A
So would kcp be an alternative shim layer to Kine — is that the idea?
E
Yeah, that's exactly right — so you wouldn't use Kine anymore. We've done some testing on that, much the same as we did with Argo CD, looking for bottlenecks, and relying on a third party means you're going to incur some kind of overhead. We found that — we haven't published it yet — but the idea going forward is that we think a native interface would be much more suitable, because you're going through double the translation for certain operations. Think about what etcd needs to do in order to handle time-series data, and then add going through Kine and then into Postgres on top of that. Yeah.
E
There are all the different places in the code where that's intertwined. Right, yeah — sorry, yes, I meant a traditional database, not necessarily anything else, but that doesn't mean it can't be. So, you know, why not? I think that's actually a really interesting idea; I think we should consider it.
A
A lot of the work would have to be that disentanglement you mentioned — that's actually upstream work in the Kubernetes code base that has to happen, right?
E
Good point. So, kcp-edge being a sub-project of kcp, our purview into Kubernetes is not direct, but we believe over time we're getting ourselves positioned in such a way that the things kcp-edge and kcp introduced — like transparent multi-cluster and this one-to-many concept — will have enough people finding value in them that we establish that upstream relationship with the SIG API Machinery folks, and at that point that conversation would also open up.

We haven't decided whether it's worthy of creating a new community just for that purpose, but we think it can fall within the bounds and the permission we're operating under with kcp-edge. We're working on it — it's a very subtle point, but a good one to bring up. Thanks for the question; we're still developing that.
C
Interesting. So kcp has just enough API machinery — let me rephrase: kcp will have just enough API machinery to be able to support Kubernetes controllers like Argo CD — basically the informer pattern, the caching and so on, and the different CRDs — but you don't have to have everything, just enough, right?
E
Just enough machinery, right. So that's another really interesting aspect of kcp: it has a notion of API exporting and binding, and what that allows you to do is create a workspace. They call them workspaces, not namespaces; workspaces are very low-footprint versions of namespaces, if you will — they don't rely on or require a whole big back-end infrastructure to support them.

So, in other words, you no longer need the overhead of the controller and everything else; the interfaces get passed around as objects. You don't have to build out these big, bulky control planes in order to support systems like Argo and other players in the ecosystem.
E
You know, I look at it this way, Carlos: we've got a blocker in front of us, and we're behind, so we're always trying to look for a use case that we can put in front of us. We just recently had one from the folks from Lumen and Avassa at one of the edge demo days, and Brian Chambers, who's part of Chick-fil-A, has become a huge proponent — have you all seen the restaurant compute platform? Yep, okay. So they had a round table with Lumen and Avassa, and they talked about just this: you've got these heterogeneous endpoints, edge locations as we call them, and you've got Kubernetes in the middle, and you simply can't get there, because you can't get to that scale and you don't want to do it all manually. So I said, well, this is great — this is our poster child, this is what we need to stand behind.

All right, that's all I had for you — thanks so much for all your questions. If anybody has anything else they want to discuss, my link with all my information, and how to contact me or anybody in the project, is in that invitation, and we have a good-first-issue list.

We've worked hard to curate it, and we've got a lot of traction there; a few people have joined us along the way and contributed, so we'd love to see you there if you can, and we'd love to contribute back to the Argo project. So, thank you.
A
Yeah, thank you. I've actually been following the kcp project in the background, but I think I'm going to start taking a deeper look into it. But yeah, thank you.
D
So I have two quick questions. One is related to the proposal that we put out: I think we kind of proposed, you know, a SIG for scalability as part of this, so what are the steps to go through the approval process? Is that something that's going to be decided later on, or are we deciding on it now — how does that work?
A
Good question. I mean, with the other SIGs that we have — we have a marketing one, we have a user-interface SIG — I don't think there was any formal process; we just gathered enough of a forum of people who were interested, started those things, and started scheduling meetings. So I don't think there's necessarily a formal...
G
I can answer that. So, basically, for a SIG we try to ensure that the proposal goes out to the argoproj/argoproj proposals, and I think that should be in place for the SIG. As for having an argoproj-labs project, I think that's also a PR that has to go into the Argo project proposals. It's totally fine to have this one as well, for wider visibility, because of the work that's happening, but typically for a new argoproj-labs project the proposal has to go into argoproj/argoproj, and the same applies for the SIG as well. You could potentially have both in the same request itself.
D
Gotcha, okay. So I think between us and the rest of the leads we can probably sort that out. Moving on — my second question gets a little bit more detailed, because I realized that Jun Duan is on the call and I read his blog post, so I had a scalability question about it; hopefully it's not too boring for people. I was wondering, Jun, if you're still on the call — are you still on the call?

Hi — so when I read the blog post on the scalability stuff, I noticed that at one point you tried to scale the repo server to three instances, and you showed some of the benefits, some of the gains, but you didn't try to scale the app controller as part of the scalability effort, and I'm kind of curious why. Was it because it wasn't necessary — you were primarily working with the Git repositories — or was there another reason behind it?
C
Is that in master now — in the latest release you're able to...? Maybe a question for Nicolas or somebody from Argo.
A
That's been around for a while now, and even the ability to assign shards. A lot of people don't realize that, yes, you can bump up the number of replicas of the stateful set, but that actually gives you, I'd say, kind of a random distribution, which isn't ideal; and then, for the really advanced users, they can assign shards per cluster based on the number of resources. That's also possible, and I think it's been possible since, like, 2.4, I think.
I
Actually, yeah, I need to check the date of my experiment — it was sometime in the summer of 2022. I do notice that proposal for sharding of the Argo CD applications; I didn't write about it in the article, but at that time I didn't really have that option.
D
Yeah, okay — I think we'll figure it out.
A
There's even a CLI to help with this: it's called argocd admin, I think, plus cluster shards — argocd admin cluster shards — and I think it will actually show you some numbers about how many resources are in use per cluster, which kind of feeds into the assignment. I do think the assignment should be working, because I feel like I recently used it as well. So, okay — it should work; if it doesn't, it's a bug.
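A tiny sketch of wrapping that CLI call so a benchmark harness could record the shard distribution before and after a run; it assumes the argocd CLI is installed and has access to the Argo CD installation, and flags may vary by version.

```python
# Hypothetical sketch: capture the output of `argocd admin cluster shards`.
import subprocess

def cluster_shards() -> str:
    """Return the shard/cluster distribution report as plain text."""
    out = subprocess.run(
        ["argocd", "admin", "cluster", "shards"],
        check=True, capture_output=True, text=True,
    )
    return out.stdout

if __name__ == "__main__":
    print(cluster_shards())
```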
D
Cool, okay. I also realized that Andrew Lee from our team joined — he's going to be helping with the implementation work. So thanks, Andrew.
F
Okay, that's all we've got. Thank you very much.
A
All right — if not, thanks, everyone, for attending today's meeting, and we'll see you again in a month. Thanks, everyone.